kostas-p committed
Commit 582dd5b · verified · 1 Parent(s): c8b0935

Upload folder using huggingface_hub
.claude/settings.local.json ADDED
@@ -0,0 +1,11 @@
+{
+  "permissions": {
+    "allow": [
+      "Bash(python3:*)",
+      "Bash(pip3 install:*)",
+      "Bash(/Library/Developer/CommandLineTools/usr/bin/python3:*)"
+    ],
+    "deny": [],
+    "ask": []
+  }
+}
.gitignore ADDED
@@ -0,0 +1,31 @@
+# Python
+__pycache__/
+*.py[cod]
+*$py.class
+*.so
+.Python
+env/
+venv/
+.venv
+pip-log.txt
+pip-delete-this-directory.txt
+
+# Jupyter
+.ipynb_checkpoints
+*.ipynb
+
+# IDE
+.vscode/
+.idea/
+*.swp
+*.swo
+*~
+
+# OS
+.DS_Store
+Thumbs.db
+
+# Temporary files
+*.tmp
+*.bak
+*.log
README.md CHANGED
@@ -1,3 +1,273 @@
 ---
 license: apache-2.0
+task_categories:
+- text-generation
+- question-answering
+language:
+- en
+tags:
+- code
+- api-documentation
+- dataframe
+- semantic-ai
+- fenic
+pretty_name: Fenic 0.4.0 API Documentation
+size_categories:
+- 1K<n<10K
+configs:
+- config_name: default
+  data_files:
+  - split: api
+    path: "api_df.parquet"
+  - split: hierarchy
+    path: "hierarchy_df.parquet"
+  - split: summary
+    path: "fenic_summary.parquet"
 ---
+
+# Fenic 0.4.0 API Documentation Dataset
+
+## Dataset Description
+
+This dataset contains comprehensive API documentation for [Fenic 0.4.0](https://github.com/typedef-ai/fenic), a PySpark-inspired DataFrame framework designed for building production AI and agentic applications. The dataset provides structured information about all public and private API elements, including modules, classes, functions, methods, and attributes.
+
+### Dataset Summary
+
+[Fenic](https://github.com/typedef-ai/fenic) is a DataFrame framework that combines traditional data processing capabilities with semantic/AI operations. It provides:
+- A familiar DataFrame API similar to PySpark
+- Semantic functions powered by LLMs (map, extract, classify, etc.)
+- Integration with multiple AI model providers (Anthropic, OpenAI, Google, Cohere)
+- Advanced features like semantic joins and clustering
+
+The dataset captures the complete API surface of Fenic 0.4.0, making it valuable for:
+- Code generation and understanding
+- API documentation analysis
+- Framework comparison studies
+- Training models on DataFrame/data processing APIs
+
+## Dataset Structure
+
+The dataset consists of three Parquet files:
+
+### 1. `api_df.parquet` (2,522 rows × 16 columns)
+Main API documentation with detailed information about each API element.
+
+**Columns:**
+- `type`: Element type (module, class, function, method, attribute)
+- `name`: Element name
+- `qualified_name`: Fully qualified name (e.g., `fenic.api.dataframe.DataFrame`)
+- `docstring`: Documentation string
+- `filepath`: Source file path
+- `is_public`: Whether the element is public
+- `is_private`: Whether the element is private
+- `line_start`: Starting line number in source
+- `line_end`: Ending line number in source
+- `annotation`: Type annotation
+- `returns`: Return type annotation
+- `parameters`: Function/method parameters
+- `parent_class`: Parent class for methods
+- `value`: Value for attributes
+- `bases`: Base classes for class definitions
+- `api_element_summary`: Formatted summary of the element
+
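To make the schema concrete, here is a dependency-free sketch of a few hypothetical rows using the column names above, together with the "public DataFrame methods" filter that the usage examples in this card express in Fenic and pandas:

```python
# Hypothetical api_df rows; the column names come from the table above,
# but the values are invented for illustration.
rows = [
    {"type": "method", "name": "select", "is_public": True,
     "qualified_name": "fenic.api.dataframe.DataFrame.select"},
    {"type": "method", "name": "_plan", "is_public": False,
     "qualified_name": "fenic.api.dataframe.DataFrame._plan"},
    {"type": "function", "name": "map", "is_public": True,
     "qualified_name": "fenic.api.functions.semantic.map"},
]

def public_dataframe_methods(rows):
    """Keep names of rows that are public methods of fenic.api.dataframe.DataFrame."""
    return [
        r["name"] for r in rows
        if r["type"] == "method"
        and r["is_public"]
        and r["qualified_name"].startswith("fenic.api.dataframe.DataFrame.")
    ]

print(public_dataframe_methods(rows))  # ['select']
```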
+### 2. `hierarchy_df.parquet` (2,522 rows × 18 columns)
+Same columns as `api_df.parquet`, plus two columns describing each element's position in the API hierarchy.
+
+**Additional Columns:**
+- `path_parts`: List showing the hierarchical path
+- `depth`: Depth in the API hierarchy
+
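One plausible way `path_parts` and `depth` relate to `qualified_name` is shown below; this derivation is an assumption for illustration (the dataset ships both columns precomputed):

```python
def hierarchy_columns(qualified_name):
    """Derive hierarchy information from a dotted qualified name.

    Assumption: path_parts is the dotted path split into components,
    and depth counts levels below the root package.
    """
    path_parts = qualified_name.split(".")
    return {"path_parts": path_parts, "depth": len(path_parts) - 1}

print(hierarchy_columns("fenic.api.dataframe.DataFrame"))
# {'path_parts': ['fenic', 'api', 'dataframe', 'DataFrame'], 'depth': 3}
```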
+### 3. `fenic_summary.parquet` (1 row × 1 column)
+High-level project summary.
+
+**Columns:**
+- `project_summary`: Comprehensive description of the Fenic framework
+
+## Key API Components
+
+### Core DataFrame Operations
+- Standard operations: `select`, `filter`, `join`, `group_by`, `agg`, `sort`
+- Data conversion: `to_pandas()`, `to_polars()`, `to_arrow()`, `to_pydict()`, `to_pylist()`
+- Lazy evaluation with logical query plans
+
+### Semantic Functions (`fenic.api.functions.semantic`)
+- `map`: Apply generation prompts to columns
+- `extract`: Extract structured data using Pydantic models
+- `classify`: Text classification
+- `predicate`: Boolean filtering with natural language
+- `reduce`: Aggregate strings using natural language instructions
+- `analyze_sentiment`: Sentiment analysis
+- `summarize`: Text summarization
+- `embed`: Generate embeddings
+
+### Advanced Features
+- Semantic joins and clustering
+- Model client integrations (Anthropic, OpenAI, Google, Cohere)
+- Query optimization and execution planning
+- MCP (Model Context Protocol) tool generation
+
+## Usage
+
+### Loading with Fenic (Recommended)
+
+[Fenic](https://github.com/typedef-ai/fenic) natively supports loading datasets directly from Hugging Face using the `hf://` scheme:
+
+```python
+import fenic as fc
+
+# Create a Fenic session
+session = fc.Session.get_or_create(
+    fc.SessionConfig(app_name="fenic_api_analysis")
+)
+
+# Load the API documentation split
+api_df = session.read.parquet("hf://datasets/YOUR_USERNAME/fenic-api-0.4.0/api_df.parquet")
+
+# Or load all splits at once
+df = session.read.parquet("hf://datasets/YOUR_USERNAME/fenic-api-0.4.0/*.parquet")
+
+# Explore the dataset
+api_df.show(5)
+print(f"Total API elements: {api_df.count()}")
+print(f"Schema: {api_df.schema}")
+
+# Example: Find all public DataFrame methods
+dataframe_methods = api_df.filter(
+    fc.col("qualified_name").contains("fenic.api.dataframe.DataFrame.") &
+    (fc.col("type") == "method") &
+    (fc.col("is_public") == True)
+).select("name", "docstring", "parameters", "returns")
+
+dataframe_methods.show(10)
+
+# Example: Find all semantic functions
+semantic_functions = api_df.filter(
+    fc.col("qualified_name").contains("fenic.api.functions.semantic.") &
+    (fc.col("type") == "function")
+).select("name", "qualified_name", "docstring")
+
+semantic_functions.show()
+
+# Get statistics about the codebase
+stats = api_df.group_by("type").agg(
+    fc.count("*").alias("count")
+).order_by(fc.col("count").desc())
+
+print("\nAPI Element Statistics:")
+stats.show()
+
+# Search for specific functionality
+embedding_apis = api_df.filter(
+    fc.col("name").contains("embed") |
+    fc.col("docstring").contains("embedding")
+).select("type", "qualified_name", "docstring")
+
+print(f"\nFound {embedding_apis.count()} embedding-related APIs")
+embedding_apis.show(5)
+```
+
+### Loading with Pandas
+
+```python
+import pandas as pd
+from datasets import load_dataset
+
+# Option 1: Using Hugging Face datasets library
+dataset = load_dataset("YOUR_USERNAME/fenic-api-0.4.0")
+
+# Access different splits and convert to pandas
+api_df = dataset['api'].to_pandas()
+hierarchy_df = dataset['hierarchy'].to_pandas()
+summary_df = dataset['summary'].to_pandas()
+
+# Option 2: Direct parquet loading
+api_df = pd.read_parquet('hf://datasets/YOUR_USERNAME/fenic-api-0.4.0/api_df.parquet')
+hierarchy_df = pd.read_parquet('hf://datasets/YOUR_USERNAME/fenic-api-0.4.0/hierarchy_df.parquet')
+summary_df = pd.read_parquet('hf://datasets/YOUR_USERNAME/fenic-api-0.4.0/fenic_summary.parquet')
+
+# Example: Find all public DataFrame methods
+dataframe_methods = api_df[
+    (api_df['qualified_name'].str.startswith('fenic.api.dataframe.DataFrame.')) &
+    (api_df['type'] == 'method') &
+    (api_df['is_public'] == True)
+]
+
+print(f"Found {len(dataframe_methods)} DataFrame methods")
+print(dataframe_methods[['name', 'docstring']].head(10))
+
+# Example: Analyze module structure
+modules = api_df[api_df['type'] == 'module']
+print(f"\nTotal modules: {len(modules)}")
+print("Top-level modules:")
+print(modules[modules['qualified_name'].str.count(r'\.') == 1]['name'].unique())
+
+# Example: Find all semantic functions
+semantic_functions = api_df[
+    (api_df['qualified_name'].str.startswith('fenic.api.functions.semantic.')) &
+    (api_df['type'] == 'function')
+]
+
+print("\nSemantic functions available:")
+for _, func in semantic_functions.iterrows():
+    doc_first_line = func['docstring'].split('\n')[0] if pd.notna(func['docstring']) else "No description"
+    print(f"  • {func['name']}: {doc_first_line}")
+```
+
+### Authentication for Private Datasets
+
+If you're using a private dataset, set your Hugging Face token:
+
+```bash
+export HF_TOKEN=your_token_here
+```
+
+Or in Python:
+```python
+import os
+os.environ['HF_TOKEN'] = 'your_token_here'
+```
+
+## Dataset Creation
+
+This dataset was automatically extracted from the Fenic 0.4.0 codebase using API documentation parsing tools. It captures the complete public and private API surface, including:
+- All modules and submodules
+- Classes with their methods and attributes
+- Functions with their signatures
+- Complete docstrings and type annotations
+
+## Considerations for Using the Data
+
+### Use Cases
+- Training code generation models on DataFrame APIs
+- Building API documentation search/retrieval systems
+- Analyzing API design patterns in data processing frameworks
+- Creating intelligent code completion for Fenic
+
+### Limitations
+- This dataset represents a snapshot of Fenic 0.4.0 and may not reflect newer versions
+- Some internal/private APIs may change between versions
+- Generated protobuf files are included but may be less useful for learning
+
+## Additional Information
+
+### Project Links
+- **Fenic Framework**: [https://github.com/typedef-ai/fenic](https://github.com/typedef-ai/fenic)
+- **Documentation**: See the official repository for the latest documentation
+- **Issues**: Report issues with the dataset or framework on the GitHub repository
+
+### Licensing
+This dataset is released under the Apache 2.0 license, consistent with the Fenic framework's licensing.
+
+### Citation
+If you use this dataset, please cite:
+```
+@dataset{fenic_api_2025,
+  title={Fenic 0.4.0 API Documentation Dataset},
+  year={2025},
+  publisher={Hugging Face},
+  license={Apache-2.0}
+}
+```
+
+### Maintenance
+This dataset is a static snapshot of Fenic 0.4.0. For the latest API documentation and updates, refer to the official [Fenic repository](https://github.com/typedef-ai/fenic).
api_df.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:5a83511908fded48f6446c5f7fd2479e5059c1c40df5ed57e98d8f943a584c0b
-size 465822
+oid sha256:fb9c090ba3e2377533f7ebee9e250cc5ea164962d29647a2a5180757886a26f6
+size 465818
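The parquet files are stored as Git LFS pointer files like the one in this diff: three `key value` lines (`version`, `oid`, `size`). Parsing such a pointer takes only a few lines; the sketch below uses the new `oid`/`size` values from this commit:

```python
def parse_lfs_pointer(text):
    """Parse a git-lfs pointer file into a dict of its key/value lines."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:fb9c090ba3e2377533f7ebee9e250cc5ea164962d29647a2a5180757886a26f6
size 465818
"""
info = parse_lfs_pointer(pointer)
print(info["size"])  # 465818
```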
fenic_api.py ADDED
@@ -0,0 +1,104 @@
+"""
+Dataset loading script for Fenic 0.4.0 API Documentation.
+
+This script can be used with the Hugging Face datasets library to load the dataset.
+"""
+
+import pandas as pd
+from datasets import Dataset, DatasetDict
+
+
+def load_fenic_api_dataset(data_dir="."):
+    """
+    Load the Fenic API documentation dataset.
+
+    Args:
+        data_dir: Directory containing the parquet files
+
+    Returns:
+        DatasetDict with three splits: api, hierarchy, and summary
+    """
+
+    # Load the parquet files
+    api_df = pd.read_parquet(f"{data_dir}/api_df.parquet")
+    hierarchy_df = pd.read_parquet(f"{data_dir}/hierarchy_df.parquet")
+    summary_df = pd.read_parquet(f"{data_dir}/fenic_summary.parquet")
+
+    # Convert DataFrames to Hugging Face Datasets
+    api_dataset = Dataset.from_pandas(api_df)
+    hierarchy_dataset = Dataset.from_pandas(hierarchy_df)
+    summary_dataset = Dataset.from_pandas(summary_df)
+
+    # Create a DatasetDict
+    dataset_dict = DatasetDict({
+        'api': api_dataset,
+        'hierarchy': hierarchy_dataset,
+        'summary': summary_dataset
+    })
+
+    return dataset_dict
+
+
+def get_dataframe_methods(dataset):
+    """
+    Get all DataFrame methods from the dataset.
+
+    Args:
+        dataset: The loaded Fenic API dataset
+
+    Returns:
+        Filtered dataset containing only DataFrame methods
+    """
+    api_data = dataset['api']
+
+    # Filter for DataFrame methods
+    df_methods = []
+    for item in api_data:
+        if (item['qualified_name'] and
+                'fenic.api.dataframe.DataFrame.' in item['qualified_name'] and
+                item['type'] == 'method' and
+                item['is_public']):
+            df_methods.append(item)
+
+    return Dataset.from_list(df_methods)
+
+
+def get_semantic_functions(dataset):
+    """
+    Get all semantic functions from the dataset.
+
+    Args:
+        dataset: The loaded Fenic API dataset
+
+    Returns:
+        Filtered dataset containing only semantic functions
+    """
+    api_data = dataset['api']
+
+    # Filter for semantic functions
+    semantic_funcs = []
+    for item in api_data:
+        if (item['qualified_name'] and
+                'fenic.api.functions.semantic.' in item['qualified_name'] and
+                item['type'] == 'function'):
+            semantic_funcs.append(item)
+
+    return Dataset.from_list(semantic_funcs)
+
+
+if __name__ == "__main__":
+    # Example usage
+    dataset = load_fenic_api_dataset()
+
+    print("Dataset loaded successfully!")
+    print(f"API entries: {len(dataset['api'])}")
+    print(f"Hierarchy entries: {len(dataset['hierarchy'])}")
+    print(f"Summary entries: {len(dataset['summary'])}")
+
+    # Get DataFrame methods
+    df_methods = get_dataframe_methods(dataset)
+    print(f"\nDataFrame methods found: {len(df_methods)}")
+
+    # Get semantic functions
+    semantic_funcs = get_semantic_functions(dataset)
+    print(f"Semantic functions found: {len(semantic_funcs)}")
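The two filter helpers in this script share one shape: keep records whose `qualified_name` contains a prefix and whose `type` matches. A dependency-free sketch of that common predicate, run on hypothetical rows (the real helpers operate on a `datasets.Dataset`):

```python
def filter_api_elements(rows, prefix, element_type, public_only=False):
    """Generalized form of get_dataframe_methods / get_semantic_functions."""
    return [
        r for r in rows
        if r.get("qualified_name")
        and prefix in r["qualified_name"]
        and r["type"] == element_type
        and (not public_only or r.get("is_public"))
    ]

# Hypothetical records shaped like api_df rows.
rows = [
    {"qualified_name": "fenic.api.dataframe.DataFrame.select",
     "type": "method", "is_public": True},
    {"qualified_name": "fenic.api.functions.semantic.map",
     "type": "function", "is_public": True},
]

methods = filter_api_elements(
    rows, "fenic.api.dataframe.DataFrame.", "method", public_only=True)
print([r["qualified_name"] for r in methods])
# ['fenic.api.dataframe.DataFrame.select']
```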
fenic_summary.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:8f38b7ee7b32748b2dc204503fe533347ccc426fdf9153f408cd5e10c4a582f5
-size 4313
+oid sha256:33595fbb42d0239e195216b149a653172bf08087f4b4f1086e177c32abfb68eb
+size 5195
hierarchy_df.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:6777a4f867522ffb1d0b82deffd7e4df2a4365786ebade763578589653cf07e9
-size 499075
+oid sha256:00b47451e3caedc9ce486577edf27e75837cade7b21c1ec2f1ff76c18a38521c
+size 495178