Here is what I need to know from you:
a) Is there any better way to do this other than the above?
That's more complicated than you need to make it. What I do is something like this (in a shell):
pg_dumpall -h host1 -p 5432 | psql -h host2 -p 5432 template1
and then delete the data as necessary via psql afterward.
You may need to tweak that command for your authentication scheme, replace 5432 with whatever port each server is actually listening on, and specify the proper hosts.
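If it helps to keep a copy of the dump on disk (so you can reload it more than once while testing), one variation is to dump to a file first and restore from it; host1, host2, the port, and the file name here are only placeholders for your own values:

# dump every database and global object from the production server to a file
pg_dumpall -h host1 -p 5432 > full_dump.sql

# load the whole dump into the test server
psql -h host2 -p 5432 -f full_dump.sql template1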
b) Do I need to stop all the applications which are accessing this live production database? (That is, do I need to take an outage of my complete system?)
For testing, no. pg_dump takes a consistent snapshot of the database and doesn't add a whole lot of overhead (at least not when I do it). You may notice some slowdown, but nothing that should seriously affect performance.
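If you only want to exercise one database at a time while production stays up, something along these lines works; proddb and testdb are made-up names, and host1/host2 are the same placeholders as above:

# create an empty scratch database on the test server
createdb -h host2 -p 5432 testdb

# copy a single database straight from the live server into it
pg_dump -h host1 -p 5432 proddb | psql -h host2 -p 5432 testdb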
When you actually want to switch to the new version, probably. You'll have to make sure no updates hit the database while you dump and restore if you want the data to stay consistent, so there will be downtime in this case. You can either disable outside connections via pg_hba.conf or move the server to a different port. Plan it out for when the time comes.
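As a rough sketch of the pg_hba.conf route: keep whatever entry you already use for local connections (trust below is only an example) so you can still run the dump from the server itself, reject everything arriving over the network, and have the postmaster re-read the file. The layout shown is the 7.4-style address/mask form, and the data directory path is a placeholder:

# pg_hba.conf -- keep local access, refuse all remote clients
# TYPE  DATABASE  USER  IP-ADDRESS  IP-MASK    METHOD
local   all       all                          trust
host    all       all   0.0.0.0     0.0.0.0    reject

# tell the postmaster to re-read pg_hba.conf
pg_ctl reload -D /path/to/data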
c) Are there any core area where I need to take care of these activities?
I don't understand the question.
d) I have 10 lakh (1,000,000) records on "pgsql 7.4.2". We are using the "SunOS 5.9" version of Solaris. How much time will it take to dump all the data?
That depends greatly on your hardware and your database design. For me, dump/restoring 10 GB of data (which becomes 50 GB once all the indexes are created) to decent but modest hardware takes about 4 hours.
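The only way to get a real number for your data is to time a test run on your own hardware; discarding the output like this measures just the dump side, again with host1 and the port as placeholders:

# see how long the dump alone takes on your hardware
time pg_dumpall -h host1 -p 5432 > /dev/null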
Please advise me on how to carry out this process successfully and safely without any conflicts. Thanks in advance.
Test, figure out how to do things, plan, manage any possible downtime. That's the process in a nutshell.