Compact fails

Started by sinus, December 17, 2014, 08:12:15 AM


sinus

My db is about 8.2 GB and holds 170'000 files.

Diagnostics are fine, IMatch shows no error.

But when I then want to compact the db with the command "Compact and Optimize", IMatch displays an error window: "Compact fails, not enough space on harddisk" (tried several times).

But my harddisk has over 440 GB of free space, and the harddisk itself seems to have no errors.

If I create a new IMatch database with only a few images, the command works fast and normally.

Here are some lines of the log around the E> entry; does somebody have an idea?

12.17 07:58:18+  858 [165C] 10  I>     ++ Memory used: 930'440 / 168'156'648
12.17 07:58:18+    0 [165C] 10  I>     ++ Page Cache used: 16'555 / 17'119
12.17 07:58:18+    0 [165C] 10  I>     ++ Page Cache overflow: 0 / 0
12.17 07:58:18+    0 [165C] 10  I>     ++ Page Cache maximum page size: 4'096 / 4'248
12.17 07:58:18+    0 [165C] 10  I>     ++ Cache used: 69'424'620 / 0
12.17 07:58:18+    0 [165C] 10  I>     ++ Statement Cache used: 0 / 0
12.17 07:58:18+    0 [165C] 10  I>     ++ Cache Hits: 1'274'666 / 0
12.17 07:58:18+    0 [165C] 10  I>     ++ Cache Misses: 155'075 / 0
12.17 07:58:18+    0 [165C] 10  I>     ++ Cache Writes: 935 / 0
12.17 07:58:18+    0 [165C] 10  M>     <  4 [858ms] CIMSQLite::Close
12.17 07:58:18+    0 [165C] 10  M>     >  4 PTFileSemaphore::UnlockFile  'PTFileSemaphore.cpp(134)'
12.17 07:58:18+    0 [165C] 10  M>     <  4 PTFileSemaphore::UnlockFile
12.17 07:58:18+    0 [165C] 00  S>    #STS#: "engine.uptime" 0 0 0.00 "00:02:33"
12.17 07:58:18+    0 [165C] 10  M>     >  4 PTFileSystemMonitor::Enable  'PTFileSystemMonitor.cpp(756)'
12.17 07:58:18+    0 [165C] 10  M>     <  4 PTFileSystemMonitor::Enable
12.17 07:58:18+    0 [165C] 00  M>    <  3 [2137ms] CIMEngine5::Close
12.17 07:58:18+   47 [165C] 00  M>   <  2 [2949ms] CMainFrame::CloseDatabase
12.17 07:58:19+  125 [165C] 05  M>   >  2 CIMSQLite::Vacuum  'IMSQLite.cpp(3416)'
12.17 07:58:20+ 1872 [165C] 05  M>   <  2 [1872ms] CIMSQLite::Vacuum
12.17 07:58:20+    0 [165C] 10  M>   >  2 CIMResManager::TaskDialog  'IMResManager.cpp(1005)'
12.17 07:58:20+    0 [165C] 10  I>   PTR_MSG_DB_COMPACT_FAILED
12.17 07:58:22+ 1872 [165C] 10  M>   <  2 [1872ms] CIMResManager::TaskDialog
12.17 07:58:22+    0 [165C] 00  E>  database or disk is full  'MainFrm.cpp(3599)'
12.17 07:58:22+    0 [165C] 00  M>   >  2 CMainFrame::LoadDatabase  'MainFrm.cpp(4747)'
12.17 07:58:22+    0 [165C] 00  I>   # Process Memory Info: WSC: 654MB, WSP: 829MB (NEW PEAK), PF: 1300456
12.17 07:58:22+   31 [165C] 00  M>    >  3 CMainFrame::CloseDatabase  'MainFrm.cpp(4989)'
12.17 07:58:22+    0 [165C] 10  M>     >  4 CMainFrame::StopScript  'MainFrm.cpp(5634)'
12.17 07:58:22+    0 [165C] 10  M>     <  4 CMainFrame::StopScript
12.17 07:58:22+    0 [165C] 10  M>     >  4 PTClipboardManager::Clear  'PTClipboardManager.cpp(59)'
12.17 07:58:22+    0 [165C] 10  M>     <  4 PTClipboardManager::Clear
12.17 07:58:22+    0 [165C] 10  M>     >  4 CMainFrame::SwitchViews  'MainFrm.cpp(3208)'
12.17 07:58:22+   15 [165C] 10  M>      >  5 CMainFrame::DestroyDBView  'MainFrm.cpp(3185)'
12.17 07:58:22+    0 [165C] 10  M>      <  5 CMainFrame::DestroyDBView
12.17 07:58:22+   16 [165C] 10  M>     <  4 [31ms] CMainFrame::SwitchViews
12.17 07:58:22+   16 [165C] 05  M>     >  4 CIMatchApp::EnableDBScript  'IMatch.cpp(443)'
12.17 07:58:22+    0 [165C] 05  M>     <  4 CIMatchApp::EnableDBScript
12.17 07:58:22+    0 [165C] 00  M>     >  4 CIMEngine5::Close  'IMEngine5.cpp(3423)'
12.17 07:58:22+   31 [165C] 10  M>      >  5 PTFileSystemMonitor::Enable  'PTFileSystemMonitor.cpp(756)'


There is also this line:
12.17 07:58:22+    0 [165C] 00  I>   # Process Memory Info: WSC: 654MB, WSP: 829MB (NEW PEAK), PF: 1300456

Is this unusual?





Best wishes from Switzerland! :-)
Markus

sinus

Hmmm, curious.

The diagnostics (the box at the end of the command) showed no error; everything looked OK.
But when I open the log of the diagnostics, puh, then I see this:

Analyzing database:
    ERRORS were found in database file structure!
This usually indicates that the database file is physically damaged on disk. Such errors cannot be repaired.
You should RESTORE THE LAST KNOWN WORKING BACKUP of your database!
'*** IN DATABASE MAIN ***
ON TREE PAGE 36613 CELL 0: INVALID PAGE NUMBER 1177510464
PAGE 36610 IS NEVER USED
PAGE 36611 IS NEVER USED
PAGE 36612 IS NEVER USED'


That does not sound good. Because the final diagnostics box never showed a problem, I think I have simply been backing up this damaged(?) database.

Mario

Hm, as far as I can tell, all errors reported by the database system are logged and added to the error count - and thus should show up in the dialog. I'll need to simulate this and check.

Your database file has become damaged on disk. The internal database structure is damaged. This is probably why the Compact fails.
Did you have any hardware-related problems? An IMatch crash cannot cause such damage, at least as long as Windows is able to flush file buffers (which is independent of whether IMatch closes properly or crashes). Did you maybe move the database file around, but only the .imd5 file and not the other files with the same name as the database file?
-- Mario
IMatch Developer
Forum Administrator
http://www.photools.com  -  Contact & Support - Follow me on 𝕏 - Like photools.com on Facebook

sinus

Quote from: Mario on December 17, 2014, 01:02:58 PM
Did you have any hardware-related problems? An IMatch crash cannot cause such damage, at least as long as Windows is able to flush file buffers (which is independent of whether IMatch closes properly or crashes). Did you maybe move the database file around, but only the .imd5 file and not the other files with the same name as the database file?

I had no hardware problems. But once I moved the .imd5 from one place (harddisk D) to another (harddisk C).
And I saw only one file, the xxx.imd5.

Are there other files that I should have moved too? (Of course I moved the file while IM5 was closed  :D )

sinus

Hi Mario

Sigh, I checked everything.  :-\ :-[ No valid backup of the db. Everything I have shows the same trouble: cannot compact, IMatch displays the same error.

So, unless you know a miracle, I guess I have to create a completely new db.

- Categories:
I have categories that I would like to have in the new db as well; I guess with export/import and file links I could solve this!?
- Attributes: I could export/import them, I think? (It would not be that bad if not, because I do not have many.)

- Metadata:

a) JPGs, TIFFs: because the metadata is embedded, this should not be a problem - simply import?!
But first, should I do a write-back, to be sure the metadata is in the files?

b) NEFs without sidecars:
This troubles me: I have, say, 50'000 NEFs without sidecars; their metadata lives only in the IM5 db. I did this to avoid having 50'000 extra files (sidecars) on disk, because these NEFs are not edited (good pictures, but not edited, hence only touched by IMatch).
First: I do not know, is this a good choice? Maybe it would be wiser to simply create sidecars for all NEFs?

c) NEFs with sidecars, about 20'000:
I guess here I could just do a write-back, and then the import should not be a problem?!

Sorry for the lot of questions, but I am about to lose my db, and hence I want to do it correctly next time and transfer as much data as I can.

That should be all I need for creating a new db and saving as much information from the damaged db as possible?

The damaged db still works fine; diagnostics shows no error (except what I reported above). BTW: I do not think this error has anything to do with my crashes and collections problems, because then I could compact my db.

Hmmm, I have a Pack'n'Go from September; maybe this would be better than creating a new db?! SORRY for the lot of questions, but I am a bit "out of order"  8) (I should probably keep a cool head)

JohnZeman

Markus, I'm surprised you don't have a fairly recent backup. ??

Ger

Markus,

With regards to b): can't you use the text export routine to copy all metadata to a text (CSV) file, and then use the Import CSV routine to import it into your new database?

Ger
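Ger's suggestion is essentially a CSV round-trip: write one row per file with its metadata fields, then read the rows back keyed by filename. A generic sketch with Python's csv module; the column names here are purely illustrative assumptions, since IMatch's own export/import dialogs define the real columns:

```python
import csv

# Illustrative columns only - not IMatch's actual field names.
FIELDS = ["filename", "headline", "keywords"]

def export_metadata(rows: list[dict], path: str) -> None:
    """Write one CSV row per file, with a header line naming the fields."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writeheader()
        writer.writerows(rows)

def import_metadata(path: str) -> list[dict]:
    """Read the rows back; each row becomes a dict keyed by the header fields."""
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))
```

The advantage over a write-back-and-rescan is that it also captures data that is never embedded in the image files, though matching rows back to files in the new database then depends on stable filenames.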

sinus

Quote from: JohnZeman on December 17, 2014, 03:20:47 PM
Markus, I'm surprised you don't have a fairly recent backup. ??

Yes, John, you can be surprised, and you are right, of course. Unfortunately I only have backups going back about 2 weeks, and all these dbs have the same error.

And the backups from before that I simply deleted some days ago. Well, it is my own fault for deleting them, but I did not think the remaining backups would be useless. Again a lesson for me ... though a hard one.

Stupid me.

sinus

Quote from: Ger on December 17, 2014, 03:30:29 PM
Markus,

With regards to b). Can't you use the text export routine to copy all metadata to a text (csv) file and use the Import CSV routine to import in your new database?

Ger

Hi Ger,
thanks, yes, that could be a solution. But maybe (I do not know) it is easier to write back all the metadata and then rescan. I will have a look; before I do anything, I will sleep on it ... or maybe Mario has a miracle idea. But he cannot just speak a magic word, I am afraid.

JohnZeman

Quote from: sinus on December 17, 2014, 03:31:18 PM
Unfortunately I only have backups going back about 2 weeks, and all these dbs have the same error.

And the backups from before that I simply deleted some days ago.

Oh, so you did have backups, just not one that went back to before you had the problem.  That I can understand, and I can see how it could happen to almost anyone, including me.  :o

I have been purging (deleting) my weekly backup files once they are 4 weeks old; I'm going to change that to 6 weeks now after reading about your experience here.
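The purge step John describes (delete weekly backups once they pass an age threshold) is easy to script. A minimal sketch in Python, where the folder layout and the filename pattern are just illustrative assumptions, not anyone's actual setup:

```python
import time
from pathlib import Path

def purge_old_backups(folder: str, pattern: str, max_age_weeks: int) -> list[Path]:
    """Delete backup files older than the threshold; return what was removed."""
    cutoff = time.time() - max_age_weeks * 7 * 24 * 3600
    removed = []
    for f in sorted(Path(folder).glob(pattern)):
        # Compare the file's last-modified time to the cutoff.
        if f.stat().st_mtime < cutoff:
            f.unlink()
            removed.append(f)
    return removed

# Hypothetical usage: keep roughly 6 weeks of weekly backups.
# purge_old_backups(r"D:\Backups", "*.imd5", max_age_weeks=6)
```

Raising `max_age_weeks` is exactly the "one line change" mentioned below; the lesson of this thread is to make it generous enough to reach back past the point where damage first appeared.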

sinus

Quote from: JohnZeman on December 17, 2014, 03:46:32 PM
I have been purging (deleting) my weekly backup files once they are 4 weeks old, I'm going to change that to 6 weeks now after reading your experience here.

Yep, this is wise of you. I will also add such a "delete time", but I will set it to 3 months ;)

I am on my way to look for a Pack'n'Go; I found one from September, which would maybe be better than creating a new db. But I have put such Pack'n'Go files on another disk; maybe I will find a newer one.

Sigh, always something to learn. Maybe I can avoid creating a new db. Fingers crossed.

Ger

Quote: I have been purging (deleting) my weekly backup files once they are 4 weeks old, I'm going to change that to 6 weeks now after reading your experience here.

I had this thought a few weeks ago. I normally do weekly backups on two different external drives, and once in a while I make a backup to store off-site. That means I can only go back two weeks on the weekly backups; the off-site copy might be 3 or 4 months old. Nothing in between.

So, same here: this thread reminds me to really reconsider my backup process.

Ger


Ferdinand

Markus - we all feel your pain.

John - this is where software that does incremental backups is useful.  I can restore from most days over the last 12 months, perhaps 18.  It depends on when I started a new disk.

JohnZeman

Quote from: Ferdinand on December 17, 2014, 11:24:05 PM
John - this is where software that does incremental backups is useful.  I can restore from most days over the last 12 months, perhaps 18.  Depends on when I started a new disk.

Thanks Ferdinand.  I use xxcopy, a powerful command-line file manager, and that along with 4NT gives me all the flexibility I need to adjust my incremental backups as needed.  A simple one-line change in my script solved my problem; I just never thought it could be an issue not to have backups going back very far, until Markus shared his problem with us.

Mario

Markus, please contact me via email.
I will send you an analysis tool from the database vendor. Maybe we can export the database to a secondary format and re-create a new database from that. The export/import into a new database may be able to work around the sections of your database file which are damaged.
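The export/re-import idea corresponds to a classic dump-and-restore: read everything still readable out of the damaged file as SQL text and replay it into a fresh database, leaving the damaged pages behind. A minimal sketch with Python's sqlite3 module, again assuming the database is a plain SQLite file (suggested by the `CIMSQLite` names in the log, but an assumption; the vendor tool Mario mentions is the authoritative route):

```python
import sqlite3

def dump_and_rebuild(damaged_path: str, fresh_path: str) -> None:
    """Replay the readable contents of a damaged SQLite file into a new one."""
    src = sqlite3.connect(damaged_path)
    dst = sqlite3.connect(fresh_path)
    try:
        # iterdump() walks the schema and data and emits them as SQL text;
        # reads that hit damaged pages may raise sqlite3.DatabaseError,
        # so in a real recovery you would dump table by table and skip failures.
        dst.executescript("\n".join(src.iterdump()))
        dst.commit()
    finally:
        src.close()
        dst.close()
```

The rebuilt file is written page by page from scratch, so it no longer contains the orphaned or invalid page references that made the original fail its integrity check.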

Since damaged databases are so rare, the database vendor may be interested in the findings as well.

You should also use one of the slower write modes on your system (Edit > Preferences > Database). Use the "Normal" mode, which saves data to disk more often. If your system sometimes has trouble writing data back to disk under stress, this may help. The error you report only happens when Windows fails to write data to the physical disk after the database system has written it. Windows reports the data as written and the database system closes the transaction. But then Windows fails to write the data from the disk cache to the disk (or the disk fails to write the data from its own cache to the platters) and, bang, the database is damaged.
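In SQLite terms, write modes like these typically correspond to the `synchronous` pragma, which controls how hard the engine pushes data through the OS and disk caches before declaring a transaction committed. That IMatch's preference maps to this pragma is an assumption; the pragma itself looks like this:

```python
import sqlite3

# Throwaway database just to demonstrate the pragma;
# "example.db" is a placeholder name, not an IMatch file.
conn = sqlite3.connect("example.db")

# FULL forces a sync to disk at every critical moment: slowest, but
# gives the best chance of surviving a power loss or a lazy disk cache.
conn.execute("PRAGMA synchronous = FULL")

# Reading the pragma back returns a number:
# 0 = OFF, 1 = NORMAL, 2 = FULL, 3 = EXTRA.
level = conn.execute("PRAGMA synchronous").fetchone()[0]
conn.close()
```

Note that even `FULL` only guarantees the sync request is issued; as Mario describes, a drive that acknowledges writes it has not actually committed to the platters can still corrupt the file.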

Ferdinand

Quote from: JohnZeman on December 18, 2014, 02:21:59 AM
I use xxcopy, a powerful CMD line file manager, and that along with 4NT gives me all the flexibility I need to adjust my incremental backups as needed.

I used to use xxcopy, but ultimately it wasn't enough.  Glad that it works for you.

I hope this vendor's tool does something for you, Markus.

sinus

Quote from: Ferdinand on December 17, 2014, 11:24:05 PM
Markus - we all feel your pain.

John - this is where software that does incremental backups is useful.  I can restore from most days over the last 12 months, perhaps 18.  Depends on when I started a new disk.

Thanks, Ferdinand
Restoring over the last 12 months or more: super!

sinus

Quote from: Mario on December 18, 2014, 08:20:30 AM
Markus, please contact me via email.

Thanks, Mario, I will do so!

Richard

Hello Mario,

Since IMatch runs a database diagnosis plus compacting and optimizing during a backup, shouldn't any database damage be reported then?

ubacher

...a thought about backups: I wonder how many backups are out there in this world which, when needed, don't work. Who - let's be honest - has taken the trouble to seriously test their backup system? It just takes too long. And takes extra hardware.

About Pack&Go: I recently tried to restore a package (I don't recall why I needed to - I think it was also a damaged db) and found it did not work: it reported a checksum error on the db file! Conclusion: check your Pack&Go every now and then - don't blindly trust it.

Ferdinand

From time to time I need to retrieve a backup of a few files from a specific date from my layers of incremental backups, and it has always worked.  When I set up this current system early last year I did test a restore of a backup of the OS partition, and that also worked, but it's something I don't need to do very often, like once every 5 years.

sinus

Quote from: Mario on December 18, 2014, 08:20:30 AM
Markus, please contact me via email.
I will send you an analysis tool from the database vendor.

Hey Mario, I sent you an email yesterday. But I could solve my problem (I think, at least) with my latest Pack'n'Go.
If you think it makes sense for you or for the vendor, you can of course still send me the tool and I will run it on the damaged db, which I have kept; otherwise you can ignore my mail.

The very newest was from 6 December 2014, but this file was corrupt.
So I tried the next one, from 21 November, and this time it worked. So I have only "lost" the images from 21 Nov until now.

I store my collections (pins) in 3 IPTC fields, and I wrote the metadata back in the old damaged db (which still works, but cannot compact).

With the restored db I could import the missing images from 21 Nov until now, and all the metadata is fine. I lost the collections, of course, but now I can easily search the metadata fields for entries like "red pin" and so on, then add the pins, delete the metadata fields again, and I should have a good, working db with the newest images again. :)

I only had a few Attributes for these images; I will add them later, it will not take a lot of time.

I ran a diagnosis: no error (except the described @builder-error), and the compact command also worked fine! Yeah!

So my problem, I hope and think, is solved!

BTW: I will think about a better backup workflow (THANKS for all the postings here, folks!!!).
I will also think about MAYBE storing the most important collections in some metadata fields as well. Like in the "good old IM3 times" I would then have almost ALL important information inside my images, embedded (JPGs, TIFFs ...) or in sidecars (NEFs). Even if I then moved to another DAM (which I hope not  8), I could recover all my important stuff, because everything is inside the images or sidecars. This helped me a lot in IM3.

sinus

Forgot to mention:

Because I use the auto-stacking feature for stacking, and I keep the stacking information in a metadata field, restoring the stacks is fortunately also not a problem for me.
I simply select the files in a folder and run the auto-stacking command.

Very convenient!  :)