On 11/29/2016 01:15 AM, Thomas Güttler wrote:
>
>
> Am 28.11.2016 um 16:01 schrieb Adrian Klaver:
>> On 11/28/2016 06:28 AM, Thomas Güttler wrote:
>>> Hi,
>>>
>>> PostgreSQL is rock solid and one of the most reliable parts of our
>>> toolchain.
>>>
>>> Thank you
>>>
>>> Up to now, we have not stored files in PostgreSQL.
>>>
>>> I was told that you must not do this ... But that was 20 years ago.
>>>
>>>
>>> I have 2.3 TB of files. The file count is 17M.
>>>
>>> Up to now we have used rsync (via rsnapshot) to back up our data.
>>>
>>> But it takes longer and longer for rsync to detect
>>> the changes. Rsync checks every file, yet on any given
>>> day very few actually change; more than 99.9% do not.
>>
>> Are you rsyncing all the files at one time?
>
> Yes, we rsync every night.
>
>> Or do you break it down into segments over the day?
>
> No, up to now it is one rsync run.
Unless everything is in a single directory, it would seem you could
break this down into smaller jobs that are spread over the day.
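As a minimal sketch of that split, something like the following could generate one rsync job per top-level directory; the source path and destination host are placeholders, and the emitted commands would be scheduled (e.g. via cron) at different hours:

```shell
#!/bin/sh
# Sketch: instead of one big nightly rsync, plan one job per top-level
# directory so the runs can be spread over the day.
# /data/files and backuphost:/backups/files are hypothetical paths.

plan_jobs() {
    src=$1; dest=$2
    for dir in "$src"/*/; do
        [ -d "$dir" ] || continue          # skip if the glob matched nothing
        name=$(basename "$dir")
        # -a preserves permissions/times, --delete mirrors removals
        printf 'rsync -a --delete %s %s/%s/\n' "$dir" "$dest" "$name"
    done
}

plan_jobs /data/files backuphost:/backups/files
```

Each printed command is independent, so the jobs can also run in parallel or be retried individually if one fails.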
>
>> The closest I remember is Bacula:
>>
>> http://blog.bacula.org/documentation/documentation/
>>
>> It uses a hybrid solution where the files are stored on a file server
>> and data about the files is stored in a database.
>> Postgres is one of the database backends it can work with.
>
> I have heard of Bacula, but I was not aware that it can use
> Postgres for the metadata.
>
>>>
>>> I hope it would be easier to back up only the files
>>> that changed.
>>
>> Backup to where and how?
>> Are you thinking of using replication?
>
> No, replication is not the current issue. Plain old backup is my current
> issue.
>
> Backup where and how? ... That's what this question is about :-)
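For a plain old backup that avoids re-scanning all 17M files, one common approach is to keep a timestamp file marking the last successful run and hand only newer files to rsync via --files-from. A minimal sketch, with placeholder paths, assuming file mtimes are trustworthy:

```shell
#!/bin/sh
# Sketch: list only files changed since the last backup, relative to
# the source root, in the form rsync --files-from expects.
# All paths here are hypothetical examples.

list_changed() {
    src=$1; stamp=$2
    # -newer compares mtimes against the timestamp file;
    # sed strips the source prefix to make the paths relative.
    find "$src" -type f -newer "$stamp" | sed "s|^$src/||"
}

# Intended use (sketch):
#   list_changed /data/files /var/lib/backup/last-run > /tmp/changed.txt
#   rsync -a --files-from=/tmp/changed.txt /data/files backuphost:/backups/files
#   touch /var/lib/backup/last-run    # only after rsync succeeded
```

The timestamp must only be advanced after a successful transfer, otherwise files changed during a failed run would be silently skipped next time.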
>
--
Adrian Klaver
adrian.klaver@aklaver.com