import matplotlib
matplotlib.use('Agg')  # Use non-GUI backend
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import spacy
import time
import faiss
from sentence_transformers import SentenceTransformer, util
from sklearn.decomposition import PCA
import textwrap
from sklearn.metrics.pairwise import cosine_similarity

from utils.model_loader import load_embedding_model
from utils.helpers import fig_to_html, df_to_html_table


def vector_embeddings_handler(text_input, search_query=""):
    """Show vector embeddings and semantic search capabilities."""
    output_html = []

    # Add result area container
    output_html.append('<div class="result-area">')
    output_html.append(
        '<p>Your text has been processed and converted into '
        'high-dimensional vector representations.</p>'
    )

    try:
        # Split the input into sentence-level segments and embed each one
        nlp = spacy.load('en_core_web_sm')
        sentences = [sent.text.strip() for sent in nlp(text_input).sents]
        model = load_embedding_model()
        embeddings = model.encode(sentences)

        # Summary statistics shown above the segment list
        stats = {
            'Characters': len(text_input),
            'Text Segments': len(sentences),
            'Vector Dimensions': embeddings.shape[1],
            'Embedding Vectors': embeddings.shape[0],
        }
        for label, value in stats.items():
            output_html.append(f'<div class="stat"><b>{label}</b>: {value}</div>')

        # List each embedded text segment
        for sentence in sentences:
            output_html.append(f'<div class="segment">{sentence}</div>')

        output_html.append(
            '<p>Search for content by meaning, not just keywords. The system '
            'will find the most semantically similar text segments.</p>'
        )

        # Rank segments against the query by cosine similarity
        if search_query:
            query_vec = model.encode([search_query])
            sims = cosine_similarity(query_vec, embeddings)[0]
            best = int(np.argmax(sims))
            output_html.append(
                f'<div class="match">Best match: {sentences[best]} '
                f'(similarity {sims[best]:.2f})</div>'
            )
    except Exception as e:
        output_html.append(f'<p>Could not generate embeddings: {str(e)}</p>')

    output_html.append(
        '<p>Vector embeddings are numerical representations of text that '
        'capture semantic meaning in high-dimensional space. They convert '
        'words, sentences, or documents into dense vectors where similar '
        'content has similar vector representations.</p>'
    )
    output_html.append(
        '<p>Our system uses the SentenceTransformer model to create embeddings '
        'that capture the semantic meaning of your text. The cosine similarity '
        'between vectors determines how related different pieces of content '
        'are, enabling powerful semantic search capabilities.</p>'
    )
    output_html.append('</div>')
    return ''.join(output_html)
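
# The cosine-similarity ranking behind the semantic search can be sketched in
# plain NumPy. The three-dimensional "embeddings" below are illustrative
# stand-ins for real sentence-transformer output (which would be hundreds of
# dimensions); the ranking logic is the same.
#
# import numpy as np
#
# def cosine_sim(a, b):
#     # Cosine similarity: dot product of the two vectors divided by the
#     # product of their L2 norms
#     return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
#
# # Toy vectors for three text segments (not real model output)
# segments = ["cats purr", "dogs bark", "stock prices rose"]
# embeddings = np.array([
#     [0.9, 0.1, 0.0],   # near the "animal" direction
#     [0.8, 0.2, 0.1],
#     [0.0, 0.1, 0.95],  # near the "finance" direction
# ])
#
# query = np.array([0.85, 0.15, 0.05])  # a query close to the "animal" region
#
# # Rank segments by similarity to the query, highest first
# scores = [cosine_sim(query, e) for e in embeddings]
# order = np.argsort(scores)[::-1]
# for i in order:
#     print(f"{segments[i]}: {scores[i]:.3f}")
#
# The finance-flavoured segment lands last because its vector points in a
# different direction from the query, regardless of any keyword overlap.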