Re: AIX support - Mailing list pgsql-hackers

From: Tom Lane
Subject: Re: AIX support
Msg-id: 399291.1769998688@sss.pgh.pa.us
In response to: Re: AIX support (Tom Lane <tgl@sss.pgh.pa.us>)
Responses: Re: AIX support
List: pgsql-hackers
I wrote:
> * s_lock.h changes: it's not okay to change anything affecting non-AIX
> I've done a little research on this topic, which I'll include in a
> separate message.

So, as previously mentioned, the problem with what the v11 patch
does in s_lock.h is that it changes the implementation of spinlocks
for every PPC platform, not only AIX.  We would accept that given
proof that __sync_lock_test_and_set() is as good as or better than
the existing hand-rolled assembler on every PPC platform.  But you've
offered no such proof, and it seems like sufficient evidence would be
pretty hard to come by.  I think you're far better off not changing
this code for non-AIX in the initial commit.  We can always revisit
the topic later at leisure.

There are a couple of ways you could do that.  An easy answer is
some #ifdefs:

 static __inline__ int
 tas(volatile slock_t *lock)
 {
+#ifndef _AIX
     slock_t _t;
     int _res;
...
 :    "memory", "cc");
     return _res;
+#else
+    return __sync_lock_test_and_set(lock, 1);
+#endif
 }
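
For reference, here is what the intrinsic-only path amounts to in
isolation (a minimal sketch with a simplified typedef, not the actual
tree layout; __sync_lock_test_and_set/__sync_lock_release are gcc's
documented atomic builtins):

/* Minimal sketch of an intrinsic-based spinlock, stripped of the
 * surrounding s_lock.h machinery.  tas() returns the lock's previous
 * value, so nonzero means "already held".  Acquire semantics come from
 * __sync_lock_test_and_set, release from __sync_lock_release. */
typedef unsigned int slock_t;

static __inline__ int
tas(volatile slock_t *lock)
{
    return (int) __sync_lock_test_and_set(lock, 1);
}

static __inline__ void
s_unlock(volatile slock_t *lock)
{
    __sync_lock_release(lock);
}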

However, I wonder whether you've even proven that
__sync_lock_test_and_set is a win on AIX.  On the same principle
of "get it working again first and optimize later", it might be
better to resurrect the longstanding asm implementation and then
later look into whether the compiler intrinsic improves matters.
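
(If someone wants to measure that later, the shape of the test is
simple enough.  A hypothetical harness, uncontended case only, so
take its numbers with appropriate salt:)

#include <stdio.h>
#include <time.h>

typedef unsigned int slock_t;

int
main(void)
{
    volatile slock_t lock = 0;
    const long  iters = 100000000;
    struct timespec t0, t1;
    long        i;
    double      secs;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (i = 0; i < iters; i++)
    {
        /* the spin loop is never entered in the uncontended case */
        while (__sync_lock_test_and_set(&lock, 1) != 0)
            ;
        __sync_lock_release(&lock);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%.1f ns per lock/unlock pair\n", secs / iters * 1e9);
    return 0;
}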

One way to do that is to undo what Heikki did in 0b16bb877, and
put back the hard-wired branch offsets I introduced in c41a1215f.
But I totally sympathize with Heikki's edits: those hard-wired
offsets were hard to read and harder to maintain.  Googling led
me to another answer: we can use gcc's "%=" insn counter to
generate normal assembler labels that are distinct across
multiple uses of TAS().  I don't know why we didn't choose
this approach in 2015, other than that I didn't know of that
feature personally.  Some archaeology shows that gcc has had
"%=" since gcc 2.0, so it's hardly too new.

Hence, I suggest the attached alternative solution, at least for
the first pass at this.  I've checked that this works on both AIX
(cfarm119) and NetBSD/ppc (mamba's host).

            regards, tom lane

diff --git a/src/include/storage/s_lock.h b/src/include/storage/s_lock.h
index 2522cae0c31..d6c16bce31e 100644
--- a/src/include/storage/s_lock.h
+++ b/src/include/storage/s_lock.h
@@ -411,6 +411,11 @@ typedef unsigned int slock_t;
  * an isync is a sufficient synchronization barrier after a lwarx/stwcx loop.
  * But if the spinlock is in ordinary memory, we can use lwsync instead for
  * better performance.
+ *
+ * Ordinarily, we'd code the branches here using GNU-style local symbols, that
+ * is "1f" referencing "1:" and so on.  But some people run gcc on AIX with
+ * IBM's assembler as backend, and IBM's assembler doesn't do local symbols.
+ * Instead, use %= to make the symbols unique.
  */
 static __inline__ int
 tas(volatile slock_t *lock)
@@ -421,17 +426,17 @@ tas(volatile slock_t *lock)
     __asm__ __volatile__(
 "    lwarx   %0,0,%3,1    \n"
 "    cmpwi   %0,0        \n"
-"    bne     1f            \n"
+"    bne     TAS_fail%=    \n"
 "    addi    %0,%0,1        \n"
 "    stwcx.  %0,0,%3        \n"
-"    beq     2f            \n"
-"1: \n"
+"    beq     TAS_ok%=    \n"
+"TAS_fail%=: \n"
 "    li      %1,1        \n"
-"    b       3f            \n"
-"2: \n"
+"    b       TAS_out%=    \n"
+"TAS_ok%=: \n"
 "    lwsync                \n"
 "    li      %1,0        \n"
-"3: \n"
+"TAS_out%=: \n"
 :    "=&b"(_t), "=r"(_res), "+m"(*lock)
 :    "r"(lock)
 :    "memory", "cc");
