Assorted topics

Part 2 ["there's MORE!"]

The roundup continues...

We continue looking at even more 'cutting edge' technologies and industry implementations of retrieval, recommendations, knowledge extraction, etc.

This much is for sure: the field of 'IR' is one of the most RAPIDLY changing fields! Why? Because information is what runs society :)

External memory to enhance generic chat

The idea of using an LLM as a generic text engine (one that has no deep 'domain knowledge') ALONG WITH 'external memory' [a custom 'DB' that contains content knowledge] is rapidly gaining ground!

https://towardsdatascience.com/use-chatgpt-to-query-your-neo4j-database-78680a05ec2 shows how ChatGPT can be used with Neo4j (a graph DB). The idea is this: rather than query Neo4j using its own 'Cypher' query language, we can use natural language instead.
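As a sketch of this idea (the schema string and helper function below are our own illustration, not from the article), we can assemble a text-to-Cypher prompt; the actual model API call is left out:

```python
# Sketch: build a prompt asking an LLM to translate a natural-language
# question into a Cypher query for Neo4j. The schema and function name
# here are hypothetical; the model call itself (eg OpenAI's API) is omitted.

def build_cypher_prompt(question: str, schema: str) -> str:
    """Assemble a text-to-Cypher prompt: schema + question + instructions."""
    return (
        "You are a Neo4j expert. Given this graph schema:\n"
        f"{schema}\n"
        "Translate the question into a single Cypher query.\n"
        f"Question: {question}\n"
        "Cypher:"
    )

schema = "(:Person {name})-[:ACTED_IN]->(:Movie {title})"
prompt = build_cypher_prompt("Which movies did Tom Hanks act in?", schema)
print(prompt)
```

The point is that the user only ever writes the natural-language question; the schema and instructions are filled in behind the scenes.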

https://tdoehmen.github.io/blog/2023/03/07/quackingduck.html shows how to use chat to generate SQL!
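Once the model has produced SQL, we can run it against a real database. A minimal sketch (the `generated_sql` string below is a hypothetical model output, not a real API response; in practice you would validate it before executing):

```python
import sqlite3

# Hypothetical model output for "Which products cost more than 10?"
# In practice this would come back from a chat API call.
generated_sql = "SELECT name FROM products WHERE price > 10 ORDER BY name"

# Build a tiny in-memory database to run the generated query against.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (name TEXT, price REAL)")
conn.executemany("INSERT INTO products VALUES (?, ?)",
                 [("pen", 2.5), ("lamp", 30.0), ("desk", 120.0)])

rows = [r[0] for r in conn.execute(generated_sql)]
print(rows)  # ['desk', 'lamp']
```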

RR - Rethinking with Retrieval (https://arxiv.org/pdf/2301.00303) solves reasoning-based tasks by using an external KB (knowledge base).
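A toy sketch of the core RR idea (the names and KB below are our own, not the paper's code): score each candidate chain of reasoning by how well its steps are supported by the external KB, and keep the best-supported one.

```python
# Toy external knowledge base: a set of known-true statements.
KB = {
    "Paris is the capital of France",
    "The Seine flows through Paris",
}

def support_score(steps, kb):
    """Fraction of reasoning steps found in the KB (toy exact matching)."""
    return sum(s in kb for s in steps) / len(steps)

# Two candidate reasoning chains; only the first is fully supported.
candidates = [
    ["Paris is the capital of France", "The Seine flows through Paris"],
    ["Lyon is the capital of France", "The Seine flows through Paris"],
]
best = max(candidates, key=lambda c: support_score(c, KB))
print(best[0])  # the fully supported chain wins
```

The real method retrieves relevant evidence per step rather than doing exact matches, but the select-by-faithfulness structure is the same.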

'Retrieval Transformer' is an approach that also uses external memory, to keep the core LLM's size DOWN [to as little as 4% of a regular LLM's!] - https://jalammar.github.io/illustrated-retrieval-transformer/.
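The retrieval step can be sketched as follows (a toy version: we use a bag-of-words embedding as a stand-in for the BERT embeddings the real model uses, and plain cosine similarity for the neighbour search):

```python
# Toy retrieval-enhancement: embed text chunks, fetch the nearest
# neighbour of the query, and prepend it to the model's input.
from collections import Counter
import math

def embed(text):
    """Toy bag-of-words 'embedding' (stand-in for BERT embeddings)."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

chunks = [
    "the eiffel tower is in paris",
    "kmeans is a clustering algorithm",
    "paris is the capital of france",
]
query = "where is the eiffel tower"
q = embed(query)
top = max(chunks, key=lambda c: cosine(q, embed(c)))
print(top)  # the chunk about the eiffel tower
```

Because the facts live in the retrieved chunks rather than in the model's weights, the core model can stay small.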

Autonomous task-achieving

Rather than carry on a back-and-forth conversation with an LLM to achieve a task, what if we could specify the task, and let the LLM AUTONOMOUSLY solve it, using sub-goals? This is classic 'agent-based' architecture, an elusive idea in AI thus far!

AutoGPT [eg. https://www.digitaltrends.com/computing/what-is-auto-gpt/] is a brand new approach that does this.
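The control flow can be sketched like so (a toy version: the two stub functions below stand in for LLM calls, and their names are our own):

```python
# Toy sketch of the autonomous-agent idea: given a goal, ask the 'LLM'
# (stubbed here) for sub-goals, then work through them with no human
# in the loop between steps.

def plan(goal):
    # Stub for the LLM planning call: split the goal into sub-goals.
    return [f"research {goal}", f"draft {goal}", f"refine {goal}"]

def achieve(subgoal):
    # Stub for the LLM execution call for one sub-goal.
    return f"done: {subgoal}"

goal = "a blog post on NeRF"
log = [achieve(sg) for sg in plan(goal)]
print(log[-1])  # done: refine a blog post on NeRF
```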

Another radical idea is to use LLMs to simulate human behavior, giving rise to 'generative agents': https://arxiv.org/abs/2304.03442.

LLM+tasks+memory -> a 'computer' system!

Here is yet another idea: treat the LLM+tasks+DB as a 'computer' (analogous to processor+code+data)!

BabyAGI [eg. https://finance.yahoo.com/news/babyagi-taking-silicon-valley-storm-121500747.html] is such an attempt.
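The processor+code+data analogy maps onto a simple loop (a toy sketch; the function names are ours, and the stubs stand in for LLM calls): the task queue is the 'code', the LLM execution step is the 'processor', and accumulated results are the 'data'.

```python
from collections import deque

def execute(task, memory):
    # Stub for the LLM execution step: here we just record completion.
    return f"result of {task!r} (given {len(memory)} prior results)"

def expand(task):
    # Stub for the LLM task-creation step: derive at most one follow-up.
    return [f"review {task}"] if not task.startswith("review") else []

tasks = deque(["draft outline"])   # the 'code'
memory = []                        # the 'data'
while tasks:
    task = tasks.popleft()
    memory.append(execute(task, memory))   # the 'processor' step
    tasks.extend(expand(task))

print(len(memory))  # 2: the original task and its follow-up
```

BabyAGI's real loop also re-prioritizes the queue after each step; that is omitted here for brevity.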

LangChain is a task-programming framework where we use specific commands in the form of 'templates' to compose our queries and run() them, eg. https://www.pinecone.io/learn/langchain-intro. SudoLang is another such task-spec language.
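The template idea looks roughly like this (a minimal class of our own, not LangChain's actual API): a prompt with named slots, plus a run() that would normally send the filled-in prompt to the LLM.

```python
class PromptTemplate:
    """Toy prompt template with named slots (hypothetical, for illustration)."""

    def __init__(self, template: str):
        self.template = template

    def run(self, **kwargs) -> str:
        prompt = self.template.format(**kwargs)
        return prompt  # a real chain would send this to the LLM here

t = PromptTemplate("Summarize {topic} for a {audience} audience.")
print(t.run(topic="NeRF", audience="beginner"))
# Summarize NeRF for a beginner audience.
```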

'OPL' is the name we could give to such a stack, comprising O(penAI)+P(inecone)+L(angChain), eg. https://towardsdatascience.com/building-llms-powered-apps-with-opl-stack-c1d31b17110f

Also :)

NER

'NER' (Named Entity Recognition), as you know, is a useful NLP information-related task: given text, images, video, or audio, what persons/places/things/... can we identify?

We can use BERT for NER, eg. via PyTorch.
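A BERT NER model emits per-token BIO tags ('B-'egin, 'I-'nside, 'O'utside); a decoding step then groups them into entity spans. A sketch of that step (the tags below are hypothetical model output, not from a real model run):

```python
def decode_bio(tokens, tags):
    """Group B-/I- tagged tokens into (entity_text, label) spans."""
    spans, current, label = [], [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):            # start of a new entity
            if current:
                spans.append((" ".join(current), label))
            current, label = [tok], tag[2:]
        elif tag.startswith("I-") and current:  # continuation
            current.append(tok)
        else:                               # outside any entity
            if current:
                spans.append((" ".join(current), label))
            current, label = [], None
    if current:
        spans.append((" ".join(current), label))
    return spans

tokens = ["Ada", "Lovelace", "lived", "in", "London"]
tags   = ["B-PER", "I-PER", "O", "O", "B-LOC"]
print(decode_bio(tokens, tags))
# [('Ada Lovelace', 'PER'), ('London', 'LOC')]
```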

Topic modeling

BERTopic is a topic-modeling technique based on BERT.

Here is a guide.
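BERTopic's distinctive step is 'c-TF-IDF': after documents are embedded and clustered, each cluster is treated as one big document, and words are scored by within-cluster frequency times inverse cluster frequency. A toy sketch of just that step (the clustering is assumed done; data and names are made up):

```python
import math
from collections import Counter

# Pretend these clusters came out of embedding + clustering.
clusters = {
    "topic0": ["cats purr", "cats like dogs", "cats sleep"],
    "topic1": ["stocks rise", "stocks and bonds", "stocks fall"],
}

# One word-count 'document' per cluster.
counts = {t: Counter(" ".join(docs).split()) for t, docs in clusters.items()}
n = len(clusters)

def top_word(topic):
    """Highest c-TF-IDF-style scoring word for a cluster (toy version)."""
    def score(w):
        df = sum(w in c for c in counts.values())  # clusters containing w
        return counts[topic][w] * math.log(1 + n / df)
    return max(counts[topic], key=score)

print(top_word("topic0"), top_word("topic1"))  # cats stocks
```

Words frequent in one cluster but rare across clusters float to the top, and those become the topic's label words.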

KG construction

Knowledge graphs (KGs) are an excellent form of knowledge representation, since they are well structured (eg via (s,p,o) triplets). https://medium.com/@dallemang/llms-closing-the-kg-gap-29feee9fa52c shows how we can use ChatGPT to create KGs from plain (ie unstructured) text.
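To make the target structure concrete, here is a toy extractor (in practice we would prompt ChatGPT to emit the triplets; the regex below is merely a stand-in for the LLM):

```python
import re

def extract_triples(text):
    """Pull (subject, 'is_a', object) triplets from 'X is a Y' sentences."""
    pattern = r"(\w+) is an? (\w+)"
    return [(s, "is_a", o) for s, o in re.findall(pattern, text)]

text = "Paris is a city. Neo4j is a database."
print(extract_triples(text))
# [('Paris', 'is_a', 'city'), ('Neo4j', 'is_a', 'database')]
```

Each triplet then becomes an edge in the KG: subject node, predicate edge, object node.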

Recommendation engines

REs are universally useful, across multiple domains.

Monolith is TikTok's RE: https://analyticsindiamag.com/tiktok-parent-bytedance-reveals-its-sota-recommendation-engine/.

Twitter's RE:

Lyft's RE: https://eng.lyft.com/the-recommendation-system-at-lyft-67bc9dcc1793.
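One classic idea underneath many REs is item-based collaborative filtering: recommend items similar (by co-rating patterns) to what the user already liked. A toy sketch (the ratings data is made up; real engines like the ones above are vastly more elaborate):

```python
import math

ratings = {  # user -> {item: rating}, a tiny made-up dataset
    "u1": {"A": 5, "B": 4},
    "u2": {"A": 4, "B": 5, "C": 5},
    "u3": {"B": 4, "C": 4},
}

def item_vector(item):
    """An item's vector: the ratings it received, indexed by user."""
    return {u: r[item] for u, r in ratings.items() if item in r}

def cosine(a, b):
    shared = set(a) & set(b)
    dot = sum(a[u] * b[u] for u in shared)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

# For a user who liked 'A', rank the other items by similarity to 'A'.
sims = {i: cosine(item_vector("A"), item_vector(i)) for i in ("B", "C")}
best = max(sims, key=sims.get)
print(best)  # B
```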

Clustering

We looked at K-means clustering. Here are others you can look up:
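As a quick recap of K-means before exploring the others, here is a minimal 1-D sketch (our own toy version, k=2): assign each point to its nearest centroid, recompute centroids as cluster means, and repeat until stable.

```python
def kmeans_1d(points, centroids, iters=10):
    """Toy 1-D K-means: alternate assignment and centroid update."""
    for _ in range(iters):
        clusters = {c: [] for c in centroids}
        for p in points:
            nearest = min(centroids, key=lambda c: abs(p - c))
            clusters[nearest].append(p)
        # Recompute each centroid as the mean of its cluster.
        centroids = [sum(m) / len(m) for m in clusters.values() if m]
    return sorted(centroids)

points = [1.0, 1.2, 0.8, 9.0, 9.5, 10.0]
print(kmeans_1d(points, centroids=[0.0, 5.0]))  # ~[1.0, 9.5]
```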

Search

Here is an assortment of 'search' related items:

NeRF, AR LBS

Standard LBS retrieves addresses, maps.

A new Google Maps update will retrieve immersive views, made possible by fusing together numerous distinct photos and aerial views, and seamlessly rendering them using NeRF [more here].