Wireframe for the UI; we should clean it up, as it currently has many excessive elements.
Knowledge graph: [[INCOSE]]; role: [[system operator]]; the test article is this. Note that the HTML code does not contain the article, so it won't work. What do we do in such cases?
Problem. It is challenging for people to create structured notes, especially with templates like inputs-activities-outputs. A proper knowledge graph additionally needs frontmatter and other enriched metadata, which is even more difficult. LLMs and other AI solutions are unreliable and provide little value without a knowledge graph. As stated in the blog post, we should use explicit user feedback to improve search results, and we cannot do that without the knowledge graph.
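To make the frontmatter requirement concrete, here is a minimal sketch of generating an enriched frontmatter block for a note. The field names (role, wider-terms, template) are illustrative assumptions, not an existing Terraphim or Logseq schema:

```python
# Hypothetical sketch: render a YAML-style frontmatter block for a
# structured note. Field names are assumptions for illustration only.

def make_frontmatter(title, role, wider_terms,
                     template="inputs-activities-outputs"):
    """Build the frontmatter text that would sit at the top of a note."""
    lines = [
        "---",
        f"title: {title}",
        f"role: {role}",
        f"template: {template}",
        "wider-terms:",
    ]
    lines += [f"  - {term}" for term in wider_terms]
    lines.append("---")
    return "\n".join(lines)

fm = make_frontmatter("Test article", "system operator",
                      ["system life cycle", "verification"])
print(fm)
```

Producing this block automatically is exactly the step that is hard to do reliably without a knowledge graph behind it.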
Context. When we tag content, we should use wider terms from the standard (a controlled dictionary), not terms from the content itself (it is good when they coincide, but that may happen only in the title and headings). Otherwise, the tagging becomes unmanageable. We will use ChatGPT-generated summaries to contrast with the ideas from the text. Data sources on the user side are siloed; we unify them in the process model by tagging them and making them discoverable through the metadata.
Solution. We map narrower terms to wider terms and guide users through a checklist to control the quality of notes. Each wider term comes with out-of-the-box context that helps enrich the note:
The user renames generic relations (specific relations are renamed automatically), answers the checklist questions, and sends the notes to the Logseq journal (Terraphim also copies them to the clipboard). The open question remains how we use those notes as user feedback to improve search.
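The mapping step above can be sketched as a simple lookup from narrower terms found in the article to wider terms from the controlled dictionary. The dictionary entries below are invented examples, not actual INCOSE vocabulary:

```python
# Minimal sketch of narrower-to-wider term mapping. The mapping table
# is a hypothetical stand-in for a controlled dictionary.

NARROWER_TO_WIDER = {
    "pilot": "system operator",
    "dispatcher": "system operator",
    "acceptance test": "verification",
}

def tag_with_wider_terms(narrower_terms):
    """Return the set of wider terms covering the given narrower terms.
    Unmapped terms are reported separately so the user can review them."""
    wider, unmapped = set(), []
    for term in narrower_terms:
        if term in NARROWER_TO_WIDER:
            wider.add(NARROWER_TO_WIDER[term])
        else:
            unmapped.append(term)
    return wider, unmapped

wider, unmapped = tag_with_wider_terms(["pilot", "acceptance test", "autopilot"])
```

Keeping the unmapped terms visible is one way to feed the checklist: each unmapped term is a prompt for the user to either extend the dictionary or discard the term.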
Features.
GitHub login status visualization and the Terraphim icon.
Terraphim Cortex navigation: the user should be able to browse and select the different articles sent to the Cortex for analysis.
Wider-term links to the Logseq page.
Selection of narrower terms that the user found in the article, marked as attributes in TFinputs for the [[system operator]] role.
CSV export of the mapping results for basic semantic analysis in Excel.
Wider-terms graph visualization and narrower-terms graph visualization with relationship naming.
Text rendering of the article's paragraphs, with mini-map navigation showing the paragraph the user is currently reading.
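The CSV export feature can be sketched as follows; the column names and the relation labels are assumptions, not a fixed Terraphim format:

```python
# Hypothetical sketch of exporting mapping results to CSV for analysis
# in Excel. Column names and relation labels are invented examples.
import csv

def export_mapping(rows, path):
    """rows: iterable of (narrower_term, wider_term, relation) tuples."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["narrower_term", "wider_term", "relation"])
        writer.writerows(rows)

export_mapping(
    [("pilot", "system operator", "is-a"),
     ("acceptance test", "verification", "part-of")],
    "mapping.csv",
)
```

A flat file like this is enough for pivot-table style semantic analysis in Excel without any extra tooling.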
Benefits.
Deep reading. Together with ChatGPT summaries, it provides tools for insight generation and critical thinking and helps the user develop original ideas about the texts. Practically, the user can:
turn the notes into a personalized message to establish or improve a connection with the authors,
make an original publication and improve their personal brand,
get insights into their SFIA or SE skills.
These things require further feature development, but this is a good start.