The Java JNI headers are of course aware of this, so they type jlong as 'long long' while typing jint as 'long'. Curiously, they could just as well call it int and get the same width; perhaps it is a habit from a 16-bit C environment, where int was only 16 bits wide?
Ideally they would use the C99 (u)intNN_t typedefs such as int64_t, but some compilers still lag in their support for them.
Have issues like this been dealt with in PostgreSQL code before, and did a favorite approach emerge?