LLM (AI)
Inkdown has built-in large language model (LLM) conversation functionality. Modern LLMs encode vast amounts of knowledge; used well, they can greatly expand your personal knowledge boundaries and make managing your documents more convenient.
The following platform models are currently supported:
- claude
- deepseek
- openai
- qwen
- gemini
You can configure them in the settings, as shown in the image.
Currently, Inkdown compresses long conversation histories by truncating them to a fixed number of conversation rounds. You can adjust the maximum number of rounds as needed: keeping more rounds preserves more information across a long conversation, while keeping fewer rounds saves input tokens.
If you are not familiar with how large language models work, it is best to leave this configuration at its default and use it as-is.
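The round-based truncation described above can be sketched as follows. This is a minimal illustration, not Inkdown's actual internals: the `max_rounds` parameter and the message structure are assumptions, and a "round" is taken to mean one user message plus the replies that follow it.

```python
# Hypothetical sketch of round-based history truncation (not Inkdown's real code).
# A "round" starts at each user message and includes the assistant reply after it.

def truncate_history(messages, max_rounds):
    """Keep only the most recent `max_rounds` user/assistant rounds."""
    rounds = []
    current = []
    for msg in messages:
        if msg["role"] == "user" and current:
            rounds.append(current)  # a new user message starts the next round
            current = []
        current.append(msg)
    if current:
        rounds.append(current)
    kept = rounds[-max_rounds:]  # drop the oldest rounds beyond the limit
    return [msg for rnd in kept for msg in rnd]

history = [
    {"role": "user", "content": "q1"}, {"role": "assistant", "content": "a1"},
    {"role": "user", "content": "q2"}, {"role": "assistant", "content": "a2"},
    {"role": "user", "content": "q3"}, {"role": "assistant", "content": "a3"},
]
print(truncate_history(history, 2))  # keeps only the q2/a2 and q3/a3 rounds
```

With `max_rounds=2`, the first question and answer are dropped, so the model no longer sees them; that is the information-vs-token trade-off the setting controls.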
File Input
Many models already support image understanding. You can add images to conversations, and the model will automatically understand the image content.
In addition, Inkdown supports attaching Excel, PDF, Word, and other files. Because each platform supports file formats to a different degree, Inkdown parses attachments itself and adds the extracted text to the context, so any model can understand them. Note, however, that large attachments may consume a lot of context, and can even exceed the maximum context window of some models.
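One way to see why large attachments are a problem is a rough budget check like the one below. This is a back-of-the-envelope sketch, not Inkdown's behavior: the 4-characters-per-token heuristic is only an approximation, and `reserve_for_output` is an assumed safety margin.

```python
# Rough, model-agnostic check of whether extracted attachment text fits a
# context window. Real tokenizers vary; ~4 chars/token is a crude heuristic.

def estimate_tokens(text):
    return len(text) // 4  # approximation only

def fits_context(attachment_text, prompt, context_limit, reserve_for_output=1024):
    used = estimate_tokens(attachment_text) + estimate_tokens(prompt)
    return used + reserve_for_output <= context_limit

doc = "x" * 40000  # roughly 10k tokens of extracted attachment text
print(fits_context(doc, "Summarize this file.", context_limit=8192))   # False
print(fits_context(doc, "Summarize this file.", context_limit=32768))  # True
```

A ~10k-token attachment overflows an 8k-token model but fits comfortably in a 32k one, which is why the same attachment may work with one model and fail with another.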
Writing to Documents
Each response from the large language model can be written into a document, and all conversation records in a chat can be written into a document at once. You can find these actions in the chat interface.
Text Vectorization
Inkdown has built-in text vectorization functionality. Vectorization enables semantic document search, and its most important role is supplying your own document content as context to large language models.
For example, you can ask: I have written some database and operations-related documents, help me list them and summarize their general content.
Vectorized search differs from traditional keyword search: an exact character match is not guaranteed to be found, while content with high semantic similarity may be. For example, asking "What programming language knowledge have I recorded?" might surface notes about js, java, swift, and the like, whereas directly searching for "jav" might return no relevant results.
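The mechanics of vector search can be sketched as below: embed the query and each document snippet, then rank snippets by cosine similarity. The `embed()` here is a deliberately toy stand-in (character-bigram counts); real systems such as Inkdown's use a trained embedding model, which is what makes the matching genuinely semantic rather than character-based.

```python
# Minimal sketch of vector retrieval: embed, then rank by cosine similarity.
import math
from collections import Counter

def embed(text):
    # Toy placeholder embedding: character-bigram counts as a sparse vector.
    # A real embedding model maps text to dense semantic vectors instead.
    return Counter(text[i:i + 2] for i in range(len(text) - 1))

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "notes about javascript closures",
    "postgres index tuning",
    "swift optionals",
]
query = "javascript programming"
ranked = sorted(docs, key=lambda d: cosine(embed(query), embed(d)), reverse=True)
print(ranked[0])  # the snippet most similar to the query
```

Because ranking is by similarity rather than exact match, the best hit can differ from the query's literal characters; that is the property that lets "programming language knowledge" retrieve js, java, and swift notes.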
You can also use the @ operator to precisely introduce documents as conversation context.