# Semantic Search API
A production-ready semantic search service built with FastAPI. Upload your data (sequences + metadata), create embeddings automatically, and search using natural language queries.
## Features
- Semantic/Latent Search: Find similar sequences based on meaning, not just keywords
- FastAPI Backend: Modern, fast, async Python web framework
- FAISS Index: Efficient similarity search at scale
- Sentence Transformers: State-of-the-art embedding models
- Beautiful UI: Dark-themed, responsive search interface
- CSV Upload: Easy data import via web interface or API
- Persistent Storage: Index persists across restarts
## Quick Start

### 1. Install Dependencies

```bash
pip install -r requirements.txt
```

### 2. Run the Server

```bash
python app.py
# or
uvicorn app:app --reload --host 0.0.0.0 --port 8000
```

### 3. Open the UI

Navigate to http://localhost:8000 in your browser.
### 4. Upload Your Data
- Drag & drop a CSV file or click to browse
- Select the column containing your sequences
- Click "Create Index"
- Start searching!
## Data Format

Your CSV should have at least one column containing the text sequences you want to search. All other columns become searchable metadata.

Example:

```csv
sequence,category,source,date
"Machine learning is transforming industries",tech,blog,2024-01-15
"The quick brown fox jumps over the lazy dog",example,pangram,2024-01-10
"Embeddings capture semantic meaning",ml,paper,2024-01-20
```
## API Endpoints

### Search

```http
POST /api/search
Content-Type: application/json

{
  "query": "artificial intelligence",
  "top_k": 10
}
```
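A minimal client call with `requests` might look like the sketch below; the exact shape of the JSON response is defined in `app.py`, so the `results` key here is an assumption:

```python
import requests

# Query the running server; the base URL assumes the default local setup.
resp = requests.post(
    "http://localhost:8000/api/search",
    json={"query": "artificial intelligence", "top_k": 10},
)
resp.raise_for_status()
# "results" is an assumed response key; check app.py for the real schema.
for hit in resp.json().get("results", []):
    print(hit)
```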
### Upload CSV

```http
POST /api/upload-csv?sequence_column=text
Content-Type: multipart/form-data

file: your_data.csv
```
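Uploading the same way from Python is a short `requests` call (the response fields are whatever `app.py` returns; this sketch just prints them):

```python
import requests

# Multipart upload; "text" names the sequence column in the CSV.
with open("your_data.csv", "rb") as f:
    resp = requests.post(
        "http://localhost:8000/api/upload-csv",
        params={"sequence_column": "text"},
        files={"file": ("your_data.csv", f, "text/csv")},
    )
resp.raise_for_status()
print(resp.json())
```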
### Create Index (JSON)

```http
POST /api/index
Content-Type: application/json

{
  "sequence_column": "text",
  "data": [
    {"text": "Hello world", "category": "greeting"},
    {"text": "Machine learning", "category": "tech"}
  ]
}
```
### Get Stats

```http
GET /api/stats
```

### Get Sample

```http
GET /api/sample?n=5
```

### Delete Index

```http
DELETE /api/index
```
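The remaining endpoints are simple GET/DELETE calls. A sketch (the response schemas are whatever `app.py` defines):

```python
import requests

BASE = "http://localhost:8000"

# Read-only: index statistics and a small sample of indexed rows.
print(requests.get(f"{BASE}/api/stats").json())
print(requests.get(f"{BASE}/api/sample", params={"n": 5}).json())

# Destructive: removes the index so it can be rebuilt from new data.
requests.delete(f"{BASE}/api/index").raise_for_status()
```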
## Programmatic Usage

You can also create indexes directly from Python:

```python
from create_index import create_index_from_dataframe, search_index
import pandas as pd

# Create your dataframe
df = pd.DataFrame({
    'sequence': [
        'The mitochondria is the powerhouse of the cell',
        'DNA stores genetic information',
        'Proteins are made of amino acids'
    ],
    'category': ['biology', 'genetics', 'biochemistry'],
    'difficulty': ['easy', 'medium', 'medium']
})

# Create the index
create_index_from_dataframe(df, sequence_column='sequence')

# Search
results = search_index("cellular energy production", top_k=3)
for r in results:
    print(f"Score: {r['score']:.3f} | {r['sequence'][:50]}...")
```
## Configuration

Edit these values in `app.py` to customize:

```python
# Embedding model (from sentence-transformers)
EMBEDDING_MODEL = "all-MiniLM-L6-v2"  # Fast, 384 dimensions

# Alternatives:
# "all-mpnet-base-v2"                      # Higher quality, 768 dimensions
# "paraphrase-multilingual-MiniLM-L12-v2"  # Multilingual support
# "all-MiniLM-L12-v2"                      # Balanced quality/speed
```
## Project Structure

```
semantic_search/
├── app.py              # FastAPI application
├── create_index.py     # Programmatic index creation
├── requirements.txt    # Python dependencies
├── static/
│   └── index.html      # Search UI
├── data/               # Created at runtime
│   ├── faiss.index     # FAISS index file
│   ├── metadata.pkl    # DataFrame with metadata
│   └── embeddings.npy  # Raw embeddings (optional)
└── README.md
```
## How It Works
- Embedding Creation: When you upload data, each sequence is converted to a dense vector (embedding) using a sentence transformer model
- FAISS Indexing: Embeddings are stored in a FAISS index optimized for similarity search
- Search: Your query is embedded using the same model, then FAISS finds the most similar vectors using cosine similarity
- Results: The original sequences and metadata are returned, ranked by similarity
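The core of those steps fits in a few lines. Here is a minimal sketch (not the app's exact code) using L2-normalized embeddings with a FAISS inner-product index, which makes the inner product equal to cosine similarity:

```python
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
texts = ["DNA stores genetic information", "Proteins are made of amino acids"]

# Normalized embeddings turn inner product into cosine similarity.
emb = model.encode(texts, normalize_embeddings=True)
index = faiss.IndexFlatIP(emb.shape[1])
index.add(np.asarray(emb, dtype="float32"))

# Embed the query with the same model, then take the top-1 match.
q = model.encode(["what are proteins built from"], normalize_embeddings=True)
scores, ids = index.search(np.asarray(q, dtype="float32"), k=1)
print(texts[ids[0][0]], scores[0][0])
```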
## Performance Tips

- Model Choice: `all-MiniLM-L6-v2` is fast and good for most use cases. Use `all-mpnet-base-v2` for higher quality at the cost of speed.
- Batch Size: For large datasets, the model processes embeddings in batches automatically.
- GPU: If you have a CUDA-capable GPU, install `faiss-gpu` instead of `faiss-cpu` for faster indexing (see the sketch below).
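For the GPU route, a CUDA build of FAISS can host an existing index on all available GPUs. A sketch, assuming `faiss-gpu` is installed:

```python
import faiss

index = faiss.IndexFlatIP(384)  # e.g. built as in the examples above

# get_num_gpus()/index_cpu_to_all_gpus() only exist in GPU builds of FAISS.
if faiss.get_num_gpus() > 0:
    index = faiss.index_cpu_to_all_gpus(index)
```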
## License
MIT