AutoTagger: curious what users think of results so far?

Started by Jingo, February 13, 2025, 04:22:10 PM


Jingo

I just started using AutoTagger on a few images and the results are a bit interesting... curious what others have seen.

My photos tend to fall into two categories: vacation travel and bird photography. My main goal for the AutoTagger is to get descriptions that accurately reflect what is in the photo. So far, my results have been quite mixed, and I am curious whether my prompt and/or expectations are simply wrong.  8)

Here is a photo I tried:
IMatch2025x64_R6fzF10kuZ.png

I have already applied keywords and GPS coordinates and used reverse geocoding to gather accurate location data for the image. The single keyword Animals|Bird|Seagull was applied to the image.

I tried using Mistral with the following prompt:

[[-c-]] Describe this image in a style to make it easily searchable. Use simple English, common words,
factual language, and simple sentences.
Avoid describing anything not directly observable from the image.
Use {File.MD.hierarchicalkeywords} to assist in generating the description and information
about the photo if this value exists. Use {File.MD.XMP::exif\GPSLatitude\GPSLatitude\0} and
{File.MD.XMP::exif\GPSLatitude\GPSLongitude\0} to help determine the location the photo was taken
and provide information about the photo location.
Return three to five keywords that are not already present in the files {File.MD.hierarchicalkeywords} metadata tag.

The AI tags returned were:
AI Description: A solitary seagull flies over shallow, calm water.
AI Keywords: Objects|Water; calm; solitary; flying; shallow
AI Landmarks: <blank>

Objects|Water is OK... that is from my thesaurus, and sure, it is water. "calm" and "solitary" seem OK too: the water is calm and the bird is by itself. "flying" is wrong, and I'm not sure how the AI knows that the water is shallow... so those are meh.

The seagull is not flying... so the description is very wrong. And I didn't get any information about the place, even though I provided GPS coordinates.

I also set up a trait, but this didn't seem to work. The trait was to add an AI.bird tag with the prompt: Does this image contain one or more birds? Only answer with 'Yes' or 'No'. Store the result in the tag birds.

I also tried the animals example, but neither prompt generated traits under the AI Tags metadata.

I also tried Ollama using a 13b model and... it failed pretty miserably with the same prompt.

Anyway.. just curious what I can do to improve the results and hear thoughts from others as well. Maybe I am expecting too much or my prompt and settings are just not ideal.

Thx!! - Andy.



Mario

There are many prompt tutorials available for OpenAI, Mistral and others.
OpenAI offers a prompt tool (I have linked to it in the help).

Sometimes a simple prompt leaves the AI more room.
Also, the creativity setting has a great impact. Play with it, e.g. between 20 and 80 for the same seed value.

Mistral offers a simpler cheaper model (the only one offered by IMatch in versions before 2025.1.12) and a more expensive model (pixtral-large-latest) available in IMatch 2025.1.12 and later.

Quote: Use {File.MD.hierarchicalkeywords} to assist in generating the description and information about the photo if this value exists.

This complicates things unnecessarily. And it costs money, since your prompt is much longer than needed.
Use the examples in the help to see how hasvalue can be used to add this part to the prompt only when there are keywords.

Same for your GPS coordinate section in the prompt.
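The idea behind hasvalue is simple conditional text: a prompt section is included only when the underlying metadata exists. Here is a rough Python illustration of the concept (this is not IMatch's actual variable syntax; see the help for that):

# Illustration of the hasvalue idea only; not IMatch code or IMatch variable syntax.
# Optional prompt sections are appended only when the metadata exists,
# keeping the prompt short and the token bill small.
def build_prompt(keywords, lat, lon):
    prompt = ("Describe this image in a style that makes it easily searchable. "
              "Use simple English, common words, factual language, and short sentences.")
    if keywords:  # only spend tokens on this section when keywords exist
        prompt += f" Existing keywords for context: {keywords}."
    if lat and lon:  # likewise for the GPS section
        prompt += f" The photo was taken at latitude {lat}, longitude {lon}."
    return prompt

print(build_prompt("Animals|Bird|Seagull", None, None))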

Also, the bird is really quite small. AutoTagger by default uses 512 pixels for Mistral, if I'm not mistaken.
Resize the image to 512 pixels and look at it. Fine details will be gone.

Maybe when you enable the large image option, you will get better results for images with such small details?

Jingo

Thx Mario.. all good suggestions and I will continue to play around.  Any thoughts on why the traits just didn't work for me at all? Do they only get set if the AI returns something "valid"?

Mario

Trait tags are like any other prompt. AutoTagger sends the prompt to the AI and whatever is returned is written to the trait tag. I have experimented with this with OpenAI, Mistral and Ollama models and documented my findings in the help (prompting topic).

In my experience, the issue is often the prompt or the creativity setting.
Like with any modern AI, finding the right prompt for a purpose is sometimes tricky, a bit of magic or science ("prompt engineering"). It helps to read some of the tutorials available on the web for all AIs in use today. Often a specific word, a phrasing, or excluding something coerces the AI into doing what you want.
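For example, a stricter phrasing of the bird trait prompt from above often helps (untested here, just a sketch):

[[-c-]] Does this image contain one or more birds? Respond with exactly one word: Yes or No. Do not add punctuation or explanations.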

It also of course depends on the data the AI was trained with. If the AI has never seen an image of the Hommingberger Gepardenforelle, it won't be able to identify it. It might consider it a fish or even a trout, but that will be it.

When we get better models, or, more likely, models trained for specific purposes like taxonomy, in a format we can run with Ollama or via a cloud AI, things will get better.

I'm also waiting for "fine-tuning" (custom training) of models to become doable for normal people on normal hardware. Then we can teach AIs what we are interested in, the animal species in our photos, car models, bike models, instrument models, plant species, etc., and make the models fit our purposes better.

You can do that with features offered by OpenAI and Mistral, but this still requires money and a lot of know-how.


photophart

I only have limited experience with Ollama so far. I'm also a devout AI skeptic. However, I have to say I'm pleasantly surprised at what Ollama can do using the llava:13b model. It creates wonderfully colorful phrases when building photo descriptions. But it does get the subject matter wrong sometimes, quite wrong. Knowing its limitations, I compensate by quickly scanning the AI output after it has completed a batch of a hundred or so images, and I manually edit the errors. Overall, I'd say it is much, much faster than writing those descriptions the way I've always done it. The same applies to keywording: overall a positive improvement in how I do things, but keeping an eye on what it is doing is required. So yes, my AI skepticism has softened quite a bit; I actually kinda like it.
Mark

mopperle

The quality of the results IMHO depends highly on what you are doing with your pictures and what you expect.
An example: for me it is important to have details in the description which are not obvious. I have a picture of family members celebrating Aunt Mary's birthday. That's what I would put in the description. Any AI makes something like "some older people sitting around a table drinking coffee and eating cakes". This might be OK when you send the picture to an agency, but for private use it is not really useful.
Keywords: I'm in a situation like @Jingo's: for me it would be helpful if the AI recognizes the bird (or other animals) and uses the scientific name and the common name in my native language. That works only rarely. It might be OK for agencies.
Besides Ollama, I tested Microsoft Vision, Google AI and Imagga. The results were very mixed. None of those AIs won first place. Each of them produced total nonsense in some cases.
As Mario pointed out, it also depends highly on the prompt you are using. So you have to try various prompts and settings that fit the genre of your pictures (landscape, birds).
I also highly recommend trialling all the AIs; most of them offer a free tier and/or are not too expensive. For the pros amongst us, price shouldn't matter.
Currently there is no AI that fully satisfies me.

Mario

As for keywords, I found it very helpful to let AutoTagger collect the keywords produced by the AI in the Keyword Mapper and then adjust my thesaurus to match or ignore, map or extend keywords the AI produces. This gives me the hierarchical keywords I'm used to. Unless your photography topics vary widely, there is a finite amount of keywords the AI will produce.

LLaVA 13b is an improvement over the 7b model, but you'll need a graphics card with 16 GB VRAM to run it. The 33b model is probably even better, but there are no affordable consumer graphics cards with that much memory available. Even the new $2k NVIDIA 5090 has "only" 32 GB VRAM.

Ollama, LM Studio, etc. can handle multiple graphics cards and distribute a model across them, so maybe, if I win the lottery, I'll buy a second 4060 TI and have two GPUs with 32 GB VRAM in total to play with :o ;)

Researchers all over the world are working on making AI better, cheaper to train, and less demanding to run.
We can just lean back and profit from it all.
And thank companies like Meta, Mistral and Alibaba for making their expensive models available as open source.

You might be surprised how much "better" the massive models available via OpenAI and Mistral are.
And affordable. I processed about 3,000 images yesterday with Mistral while working on performance improvements. This is what my billing shows today:

Image1.jpg

12 euro cents for 3,000 files and almost a million tokens; that works out to 0.004 cents per file. Affordable, even for private users.



Mario

Quote: An example: for me it is important to have details in the description which are not obvious. I have a picture of family members celebrating Aunt Mary's birthday. That's what I would put in the description. Any AI makes something like "some older people sitting around a table drinking coffee and eating cakes". This might be OK when you send the picture to an agency, but for private use it is not really useful.

Did you consider including the names of the persons shown in the image in the prompt via a variable? This usually makes the AI include the persons' names in the description. The AI cannot know who the people are unless you tell it.
See: Using Persons in the prompting topic.

Quote: and uses the scientific name and the common name in my native language
Do you ask the AI to identify the species/race/breed and include the Latin name?
Note that large models like OpenAI's and Mistral's are usually much better at this than the downsized models you can run in Ollama.


Quote: Besides Ollama, I tested Microsoft Vision, Google AI and Imagga.
The Microsoft and Google services are old-style legacy models. OpenAI and Mistral are a hundred times better.

Quote: For the pros amongst us, price shouldn't matter.
Mistral has a free "Experiment" option. And when you look at my pricing example from yesterday (3,000 files for 12 cents), you will find even the paid tiers very affordable.

Be precise in your prompt. Ask the AI to identify the animal and return the taxonomic name. If you are from Europe, or you care about privacy and about a company releasing its models into the open, try Mistral.

mopperle

Quote: Did you consider including the names of the persons shown in the image in the prompt via a variable?
No, as I do not use face recognition.

For your other suggestions: we discussed all this a while ago in the testbed. ;)
Imagga is not too bad, as it offers German answers. Ollama is also quite good, but my hardware is not capable of running a bigger model. So I will move on with Imagga, but also try Mistral, although I'm a bit confused about the products/pricing; their website seems a bit chaotic.
What exactly do you use?

Mario

For Mistral, you'll want the API (La Plateforme), not Le Chat.
Click on "Try the API", create an account and a workspace. The "Experiment" tier is free and the default in IMatch.
The workspace gives you the API key to enter in AutoTagger.

Prompt for Mistral:

[[-c-]] Beschreibe dieses Bild in deutscher Sprache in ein- oder zwei Sätzen.
(English: "Describe this image in German in one or two sentences.")


Image1.jpg Image2.jpg

Well worth experimenting a bit, e.g. "Describe this image in German language in the style of...".
Mistral is a European product and multilingual by nature. Asking for a specific language, and maybe stating in the prompt that the response must be grammatically correct and free of typos, helps.

monochrome

Quote: curious what others have seen

"Not perfect, but incredibly useful".

Similar to you: it gets the big stuff mostly right but not the small stuff. It's very much hit or miss, with plenty of hits but a lot of misses too. What it does enable me to do is put something into the search box and get fairly good results from 200k images. All I wanted was a way to roughly organize my photos and videos.

I have been able to find some images that I would never have been able to find without AutoTagger.


mopperle

Quote: For Mistral, you'll want the API (La Plateforme), not Le Chat.
Click on "Try the API", create an account and a workspace. The "Experiment" tier is free and the default in IMatch.
The workspace gives you the API key to enter in AutoTagger.

Prompt for Mistral:

[[-c-]] Beschreibe dieses Bild in deutscher Sprache in ein- oder zwei Sätzen.
I did all this yesterday and it seemed to work once, but now I only get this AutoTagger error:
2025-02-14 11.40.05 000.png

My settings:
2025-02-14 11.41.35 000.png
2025-02-14 11.18.16 000.png

No idea what the problem could be. Also, on the Mistral AI website/dashboard I don't see the limitations of the experimental/free tier anywhere. All very weird.

Mario

The screen shots are of no use in this case.
The log file (see log file) contains the response from Mistral and also the error messages Mistral returned.

Do you have a paid Mistral account, or are you using the free offering? 10 calls should not be a problem either way.
Impossible to diagnose without the log file.

I have just tested AutoTagger with Mistral and it works.

mopperle

This is what the errors look like in the log:
Quote: 02.14 11:17:59+76359 [0F04] 02  I> PTCAIConnectorMistral: 1 HTTP Status Code: 422 '{"object":"error","message":{"detail":[{"type":"extra_forbidden","loc":["body","seed"],"msg":"Extra inputs are not permitted","input":2,"url":"https://errors.pydantic.dev/2.10/v/extra_forbidden"}]},"type":"invalid_request_error","param":null,"code":null}'
02.14 11:17:59+    0 [4B70] 01  W> AutoTagger: Aborting because of error
  • '{"object":"error","message":{"detail":[{"type":"extra_forbidden","loc":["body","seed"],"msg":"Extra inputs are not permitted","input":2,"url":"https://errors.pydantic.dev/2.10/v/extra_forbidden"}]},"type":"invalid_request_error","param":null,"code":null}'  'V:\develop\IMatch5\src\IMEngine\IMEngineAIAutoTagger.cpp(369)'
02.14 11:17:59+  16 [4B10] 01  W> UpdateQueue (AutoTagger): Service error 0 '{"object":"error","message":{"detail":[{"type":"extra_forbidden","loc":["body","seed"],"msg":"Extra inputs are not permitted","input":2,"url":"https://errors.pydantic.dev/2.10/v/extra_forbidden"}]},"type":"invalid_request_error","param":null,"code":null}' for file [43173]  'V:\develop\IMatch5\src\IMEngine\IMEngineUpdateQueueAutoTagger.cpp(233)'
02.14 11:17:59+  187 [6408] 05  I> Show notification 'custom' ac: 0

My plan is the free one:
2025-02-14 14.02.13 000.png


Mario

"Extra inputs are not permitted" sounds like something in the prompt sent to Mistral was not correct.
Did not saw this before.

Repeat your test but switch to debug logging via Help menu > Support.
This logs the prompt sent to the AI to the log file.

Open the log file and search for MIST-PROMPT:
Copy the corresponding section from the log into your reply and also copy the prompt you have used.

mopperle

OK, this is the "MIST-PROMPT":
Quote: MIST-PROMPT:
Respond in JSON with these keys and values in order: \"keywords\": Return ten to fifteen keywords describing this image.
For this test I used these settings; the text in the Prompt Editor is the default IMatch prompt:
2025-02-14 15.08.34 000.png

mopperle

Forgot to add that I didn't enter a prompt after pressing F7 on the picture.

Mario

That's just the first line of the prompt. Attach the ZIPped entire log file please, there is much missing.

monochrome

Quote from: Mario on February 13, 2025, 04:31:53 PM
Maybe when you enable the large image option,

Going back to the original question by Jingo, I have three suggestions:

  • Some kind of two-pass processing: first use Inception or some other object-detection network to get bounding boxes, then use those bounding boxes to create the images sent to the AI for tagging. For example, you would first run InceptionV3 and, for any region classified as "bird", send that region off to OpenAI or Mistral or whatever, then put the result in a trait tag. IMatch does not currently support this (see the sketch after this list).
  • Allow the user to mark up regions of interest for AI tags in photos, much like the person tagger works. Then create images based on these and send them to the AI. I don't think IMatch currently supports this?
  • Since edited photos should have a "strong subject", one option may be to propagate AI tags in reverse - from versions to masters. The edited version of the sample photo, for example, would probably have a tighter crop of the bird.
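To make the first idea concrete, here is a minimal sketch (assuming torchvision's pretrained Faster R-CNN as the detector; any object-detection network, including InceptionV3-based ones, would do):

# Sketch of the two-pass idea; not an IMatch feature.
# First pass: detect birds and crop them. Second pass: send each crop to a vision AI.
# Uses torchvision's pretrained COCO detector; COCO label 16 is "bird".
import torch
from PIL import Image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import pil_to_tensor

BIRD_CLASS_ID = 16  # COCO label index for "bird"
model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

def bird_crops(path, min_score=0.6):
    # Return one cropped image per confidently detected bird.
    image = Image.open(path).convert("RGB")
    tensor = pil_to_tensor(image).float() / 255.0
    with torch.no_grad():
        prediction = model([tensor])[0]
    crops = []
    for box, label, score in zip(prediction["boxes"], prediction["labels"], prediction["scores"]):
        if label.item() == BIRD_CLASS_ID and score.item() >= min_score:
            crops.append(image.crop(tuple(box.int().tolist())))
    return crops

# The second pass would send each crop to OpenAI/Mistral and store the answer in a trait tag.
for i, crop in enumerate(bird_crops("seagull.jpg")):
    crop.save(f"bird_crop_{i}.jpg")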


mopperle

Quote from: Mario on February 14, 2025, 03:25:46 PM
That's just the first line of the prompt. Attach the ZIPped entire log file please, there is much missing.

Mario

Quote from: monochrome on February 14, 2025, 03:28:13 PM
[the three suggestions quoted in full above: two-pass detection, user-marked regions of interest, and reverse propagation from versions to masters]


Now we're mixing general questions about AI with bug reports and feature requests, all in the same thread, in out-of-order sequence. Unreadable. Unmanageable. And I can no longer split the bug report out, because there are replies from different users.

1. The OpenAI / Mistral services do that already. This is why they can identify bicycles and dogs in an image.
2. Feature request?
3. Just invert your versioning then and consider the version the master. I will for sure not change the already super-complex versioning and propagation to allow reverse propagation for all or selected tags. No, thank you.

Mario

Quote from: mopperle on February 14, 2025, 03:37:27 PM
Quote from: Mario on February 14, 2025, 03:25:46 PM
That's just the first line of the prompt. Attach the ZIPped entire log file please, there is much missing.

That's the entire prompt, after all. I did not notice that you had disabled the description in your last screen shot; it was enabled in the first screen shot. I can use the same setup and the same prompt and get no error from Mistral. The error message explanation does not tell me anything: https://docs.pydantic.dev/2.10/errors/validation_errors/#datetime_past

Have you changed any other settings for Mistral, e.g. rate limits?
Try this prompt:

Return ten keywords for this image.



mopperle

Sorry for mixing this topic with a (maybe) bug; I was not sure whether it was a user error in using AutoTagger.

No other settings changed; these are my settings:


And with this prompt I get the same error; debug log attached.
2025-02-14 16.30.27 000.png

Mario

I have removed the screen shot which included your API key. I recommend you create a new API key in case some nefarious bot has already snatched it up. It's a free account, but still.

This time you used the prompt "Return ten keywords for this image. Return ten to fifteen keywords describing this image."
I meant for you to replace the actual prompt with my suggestion, not to use my suggestion as the context.

Please set seed to 0 in your preset, which is the default (you had changed it to 2).

I think Mistral does not like the seed parameter.
It seems Mistral's API is not 100% compatible and names the parameter "random_seed" instead of "seed". I need to fix that in the code. All the others use "seed".
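For the curious, the incompatibility at the API level looks roughly like this (a minimal sketch against Mistral's public chat-completions endpoint, not IMatch's actual code):

# Minimal sketch, not IMatch code. OpenAI's chat API accepts a "seed" parameter;
# Mistral expects "random_seed" instead. Sending "seed" to Mistral produces
# exactly the HTTP 422 "extra_forbidden" error shown in the log above.
import os
import requests

payload = {
    "model": "pixtral-large-latest",
    "messages": [{"role": "user", "content": "Return ten keywords for this image."}],
    "random_seed": 2,  # the name Mistral expects; "seed": 2 would be rejected
}
response = requests.post(
    "https://api.mistral.ai/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
    json=payload,
    timeout=60,
)
print(response.status_code, response.json())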

mopperle


Mario

Very good. Next time you encounter something that does not work or feels like a bug, please open a new thread.
Mixing different topics in the same thread is always confusing.

Stenis


I have been a PhotoMechanic user for four years and have a working photo DAM that I have been very pleased with for years, for my roughly 70,000 pictures. I also use DxO PhotoLab and Capture One in parallel, for different reasons, and that has worked very well I think (I do not use their image libraries).

BUT... the older I get (75), the more discontent I feel over how slowly PM has developed in recent years. Its user interface seems stuck somewhere decades back, and the product seems to have ended up in the hands of venture capitalists, resulting in a more than doubling of prices in just a year. For me productivity is extremely important, and it becomes a bigger factor with age. For that reason I'm evaluating IMatch.

To the point:

It took a few hours to get it up and running with AutoTagger and all, including configuring the OpenAI API. I also don't use hierarchical keywords, so that took a few tweaks too to get right.

I'm so, so impressed with the quality of the descriptions and flat keywords it produces with the help of OpenAI, now with initial capitals too, that I almost find myself "hovering" a few feet above the floor. This is really what I have been waiting for. For me it is totally sufficient to use the AutoTagger dialog to put OpenAI on track with a few words.

There is just one thing that is not 100%: texts I have added in the Description and Keyword elements get overwritten even though I have checked the "Merge" box in the AutoTagger setup in the "Preferences" menu. Can that be fixed soon??

Stenis

I saw that the "Merge" checkbox works for the Keyword field but not for the Description element.

I also think it is good that there is the possibility to add, for example, a request for no more than five keywords (if that is what I want) to the prompts for the Description and Keyword elements in the "Preferences" menu, so we don't need to add that every time in the AutoTagger dialog box.

Once more, Mario: thank you so much for the very good work you have done so far with the AutoTagger function. It is really brilliant together with OpenAI. It will completely change how I work with my metadata from now on. This will be a tremendous time saver.

Mario

Sounds great. Happy that you like it.

As I wrote in the help, even at the current state, the ability to use AI for keywords and/or descriptions will be immensely helpful for many users.

Even if the AIs are not perfect (yet), or don't produce results as good as carefully crafted keywords and descriptions made by humans, the output is often good enough. And way faster. Users can always go through the results and correct here and there as needed.
In my experience, this is a lot faster than crafting everything myself.

IMatch's advanced features, like automatically organizing the processed files in @Keywords or via data-driven categories based on AI.keywords or trait tags, make this even more useful. So does the ability to customize your own prompts and use existing data to give the AI context.

The Merge option only works for repeatable tags like keywords, landmarks (stored as keywords) and the AI.* tags.
The dialog should disable this option when the XMP description is used as the target. I will have a look.

dcb

Quote from: Mario on February 13, 2025, 06:50:25 PM
As for keywords, I found it very helpful to let AutoTagger collect the keywords produced by the AI in the Keyword Mapper and then adjust my thesaurus to match or ignore, map or extend keywords the AI produces. This gives me the hierarchical keywords I'm used to. Unless your photography topics vary widely, there is a finite amount of keywords the AI will produce.

Hi Mario,

I'm glad you said that because incorporating the Keyword Mapper results into my Thesaurus is something I haven't been able to get my head around.

If you have an AI keyword "beach" and later create "landform|beach" in the Thesaurus, how do you then update the image so that "beach" becomes "landform|beach"?

I can see you've put a lot of thought into this system so I'm sure it's just something I'm missing.

David
Have you backed up your photos today?

Mario


Quote: If you have an AI keyword "beach" and later create "landform|beach" in the Thesaurus, how do you then update the image so that "beach" becomes "landform|beach"?

Mapping is not retroactive. Mapping and thesaurus lookup are performed at the time AutoTagger processes the image.

If the AI delivers the keyword "beach" and you have neither a mapping to "landform|beach" in the AutoTagger keyword mapping nor the keyword "landform|beach" already in your Thesaurus, "beach" will be added to the image.

You can later create a top-level keyword "landform" in your Thesaurus and move the "beach" keyword under it.
If you enable the option to apply changes to the database in the Thesaurus Manager, IMatch will change the keywords of all files with "beach" to "landform|beach".
But it's easier to have solid mappings and a good Thesaurus first.

All of this, whether and which features you use with keyword mapping and the Thesaurus, depends on your needs and usage habits.

I started with a fairly OK-ish Thesaurus for my purposes (there is always work to do), then ran 200 or 300 or so "typical" (for me) images through AutoTagger to collect the keywords the AI produces for these typical images.
If you shoot very different motifs, you may need to repeat this for all motif groups (e.g. family photos, vacation photos, weddings, sports).

The idea is to collect as many as possible of the (finite set of) keywords the AI produces for your images.
This is the basis for keyword mapping and Thesaurus updates.

I decided which collected keywords to include in the Thesaurus, which to ignore and which to map to one or more additional (hierarchical) keywords via keyword mapping.

The "Don't import unmapped keywords" option can be used to keep keywords not explicitly mapped out of the database.
Or you use the option to "group" unmapped keywords under a base keyword, like AI| for review and easy lookup in @Keywords.
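In pseudo-code, the handling of each AI keyword described above boils down to something like this (a toy illustration of the options, not IMatch's implementation):

# Toy illustration of the keyword mapping options described above; not IMatch code.
MAPPINGS = {"beach": "Location|beach"}  # explicit keyword mappings
IGNORED = {"solitary"}                  # exclusions: keywords never wanted
GROUP_UNMAPPED_UNDER = "AI"             # or None for "don't import unmapped keywords"

def map_ai_keyword(keyword):
    if keyword in IGNORED:
        return None                                 # dropped entirely
    if keyword in MAPPINGS:
        return MAPPINGS[keyword]                    # mapped to a hierarchical keyword
    if GROUP_UNMAPPED_UNDER:
        return f"{GROUP_UNMAPPED_UNDER}|{keyword}"  # e.g. "AI|calm", for later review
    return None                                     # unmapped keywords are not imported

print([map_ai_keyword(k) for k in ("beach", "calm", "solitary")])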

All of these options exist because 10 IMatch users have 11 opinions about how to handle keywords ;)

Keyword mapping and Thesaurus usage is optional. If you are happy with the keywords the AI delivers, perfect.
Some users may only have to add some exclusions, for keywords they never want.
Other users may find it beneficial to put a bit of work into keyword mapping and their Thesaurus to "get" exactly the keywords they want / are used to from AI.

Mapping and Thesaurus integration is one of the features that make IMatch stand out amongst other "AI tagging" software out there.

rvelices

Hello.
I think this is great. Obviously not perfect, but helpful, and with a huge margin for improvement.

Personally, I would appreciate two AI fields: the date/time when the tagger ran and the provider/model used. There will be a lot of playing around with this, and they would help with comparing, checking, etc.

Mario

Such requirements could e.g. be solved by a "post AutoTagger" action that sets a metadata tag or AI trait tag from a variable...?
Maybe make the AI name and model name available as special "AutoTagger" variables, with {Application.DateTime} as the timestamp.

Feel free to add a feature request so I can learn if and how many users would like that.

dcb

Quote from: Mario on February 15, 2025, 10:07:18 AM
All of these options exist because 10 IMatch users have 11 opinions about how to handle keywords ;)


That's really surprising? Only 11! :D

I now have keywords mapping to AI|Keyword, which is helpful as I don't have to move them all manually from AI tags to hierarchical tags. You mentioned moving beach to landform in the thesaurus. Does this mean I should also have an AI category in my thesaurus? Otherwise, what is there to move?

Thanks for the long response. Nobody puts the time into their software and support that you do.
Have you backed up your photos today?

Mario

Quote: an AI category in my thesaurus?
If you let AutoTagger import keywords which are not mapped and not in the Thesaurus, it is wise to put them under a specific level, like "AI|bla...". This works whether or not you have "AI" as a top-level keyword in your Thesaurus.

The Thesaurus is meant to maintain a controlled vocabulary for keywords (a curated, fixed set). "Random" keywords produced by the AI are usually not part of this.

If you already plan to move keywords you map to AI|bla..., I would consider it more efficient to set up mapping first.
For example, having "AI|beach" is OK. But a mapping to "Location|beach" or a Thesaurus entry "Location|beach" puts the keyword where it belongs right from the start, without the need to move or map later.

I don't know your personal keyword hierarchy, the images you work with, the AI and model you work with, the keywords it produces for your images. Hence I can only give general tips on how to deal with this.

You can later use the Thesaurus Manager to move keywords the AI has produced on the top-level (or below the "AI" level) to other places in your keywords hierarchy. But if your database is huge or has a complex mix of keywords, the results may not be what you expect them to be if there are ambiguities - see the thesaurus help for details.

Personally, I find it best to get it right with mapping, not "bending" my controlled vocabulary in the Thesaurus too much to accommodate the variety and variations of keywords an AI can produce. Tight ship and all that.