repo_name stringlengths 9 75 | topic stringclasses 30 values | issue_number int64 1 203k | title stringlengths 1 976 | body stringlengths 0 254k | state stringclasses 2 values | created_at stringlengths 20 20 | updated_at stringlengths 20 20 | url stringlengths 38 105 | labels listlengths 0 9 | user_login stringlengths 1 39 | comments_count int64 0 452 |
|---|---|---|---|---|---|---|---|---|---|---|---|
modelscope/data-juicer | streamlit | 218 | How is 3sigma calculated in alphanumeric_filter? | ### Before Asking
- [X] I have read the [README](https://github.com/alibaba/data-juicer/blob/main/README.md) carefully. (Chinese guide: [README_ZH](https://github.com/alibaba/data-juicer/blob/main/README_ZH.md))
- [X] I have pulled the latest code of the main branch and run it again, and the problem still exists.
### Search before asking
- [X] I have searched the Data-Juicer [issues](https://github.com/alibaba/data-juicer/issues) and found no similar questions.
### Question
Question: The data is all in English. How were the min_ratio and max_ratio values below calculated? Are they related to the amount of data? Why do the two recipes below use completely different parameter ranges?
In the recipe redpajama-pile-stackexchange-refine.yaml, the given parameters are:
- alphanumeric_filter:
tokenization: false
min_ratio: **0.35** # <3sigma
max_ratio: **0.943** # 3sigma
In the recipe redpajama-wiki-refine.yaml, the given parameters are:
- alphanumeric_filter:
tokenization: false
min_ratio: **0.6** # <3sigma (0.735)
max_ratio: **0.884** # 3sigma
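For reference, a minimal sketch of how a "3sigma" bound of this kind is usually derived (this illustrates the statistical rule only, it is not the actual Data-Juicer analysis code; the `ratios` array below is a stand-in for the per-sample alphanumeric ratios of one dataset):
```python
import numpy as np

# Stand-in for the alphanumeric ratio of every sample in one dataset.
ratios = np.random.beta(8, 2, size=10_000)

mean, std = ratios.mean(), ratios.std()
max_ratio = mean + 3 * std  # the "# 3sigma" upper bound
min_ratio = mean - 3 * std  # a "< 3sigma" lower bound, sometimes raised by hand afterwards
print(round(min_ratio, 3), round(max_ratio, 3))
```
Because the ratio distribution differs between Stack Exchange and Wikipedia text, the same rule yields different numeric bounds for the two recipes.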
### Additional
_No response_ | closed | 2024-02-23T08:05:00Z | 2024-03-24T09:32:03Z | https://github.com/modelscope/data-juicer/issues/218 | [
"question",
"stale-issue"
] | echo-valor | 3 |
sunscrapers/djoser | rest-api | 63 | Move documentation to Read the Docs | closed | 2015-07-19T16:53:36Z | 2023-04-15T13:50:42Z | https://github.com/sunscrapers/djoser/issues/63 | [
"enhancement"
] | haxoza | 4 | |
ploomber/ploomber | jupyter | 935 | adding interactivity to documentation examples | I came across [thebe](https://github.com/executablebooks/thebe), a project that makes it possible to create runnable code snippets in HTML documents. We can use this to allow visitors to our documentation to run ploomber examples. It isn't clear yet which use cases are a good fit, since ploomber is a framework to build projects (multiple files) instead of a library where we can show something meaningful with a short snippet.
Here are some things we could showcase:
* dag caching (demonstrate how we save iteration time by caching results)
* parallelization (how we parallelize notebook execution)
| closed | 2022-07-23T03:49:15Z | 2022-08-29T19:30:37Z | https://github.com/ploomber/ploomber/issues/935 | [
"documentation"
] | edublancas | 1 |
ultralytics/yolov5 | pytorch | 13,273 | Installation on Windows 7 32 bits | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
Hi, may I know whether YOLOv5 can be installed on Windows 7 32-bit?
### Additional
_No response_ | open | 2024-08-23T03:12:47Z | 2024-08-23T18:28:27Z | https://github.com/ultralytics/yolov5/issues/13273 | [
"question"
] | KnightInsight | 3 |
ets-labs/python-dependency-injector | flask | 797 | Package level decoupling with FastAPI | version: 4.41.0
fastapi version: 0.110.2
In a project I am developing with FastAPI, I want to use dependency injection decoupled at the package level, as in the examples.
I have an OwnerContainer in the owner package and an ApplicationContainer in the top-level main package.
```
class OwnerContainer(containers.DeclarativeContainer):
    db_session = providers.Dependency()
    owner_service = providers.Factory(
        OwnerService,
        db_session=db_session,
    )
```
```
class ApplicationContainer(containers.DeclarativeContainer):
    config = providers.Configuration()
    database_session = providers.Resource(
        get_db_session
    )
    owner_package = providers.Container(
        OwnerContainer,
        db_session=database_session
    )
```
I want to use OwnerService in my owner router
```
@owner_router.get("/", response_model=List[GetOwnerSchema])
@inject
async def get_owners(service: OwnerService = Depends(Provide[OwnerContainer.owner_service])):
    try:
        result: List[GetOwnerSchema] = await service.get_owners()
        return result
    except Exception as e:
        print(e)
        raise HTTPException(status_code=500)
```
I get the error **_'Provide' object has no attribute 'get_owners'_**.
According to my research, this is related to the dependency injector not being able to wire. I can't figure out whether this is related to a bug or whether I made a mistake, but I think I used it in accordance with the examples.
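In case it helps, a minimal sketch of the wiring step that is usually missing in this situation (the module layout below is assumed, not taken from this project):
```python
# Hypothetical application start-up: wire the top-level container to the module
# that defines owner_router, then reference the provider through that container.
from app.owner import router as owner_router_module  # hypothetical module path

container = ApplicationContainer()
container.wire(modules=[owner_router_module])

# In the endpoint signature, the marker would then point at the wired parent container:
# service: OwnerService = Depends(Provide[ApplicationContainer.owner_package.owner_service])
```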
Has anyone encountered this type of problem? I need some help. | open | 2024-05-01T20:03:41Z | 2024-09-17T08:33:18Z | https://github.com/ets-labs/python-dependency-injector/issues/797 | [] | ilterisYucel | 7 |
sigmavirus24/github3.py | rest-api | 376 | Switch to using descriptors | See https://www.youtube.com/watch?v=h2-WPwGnHqE
We should probably determine if this is worth it based on performance. The descriptor I'm thinking of would be something like
``` py
class Repository(...):
    owner = GitHubDescriptor('owner', convert_to=github3.users.User)
    name = GitHubDescriptor('name')
    # ...
```
Using descriptors means we will only have to update the parsed JSON on the object when we update it instead of re-initializing all of the attributes by calling an update method. On the other hand, to take advantage of that, we will be creating more objects more often, so that would be bad. We could arbitrarily decide to cache some of the attributes but that seems bad.
In short, anything we do with this, we should probably benchmark to see if and how it negatively affects performance (which I suspect it will).
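To make the idea concrete, a rough, untested sketch of what such a descriptor could look like (the `_json_data` attribute name is invented for illustration, it is not a github3.py internal):
```python
class GitHubDescriptor:
    """Lazily pull an attribute out of the object's parsed JSON."""

    def __init__(self, key, convert_to=None):
        self.key = key
        self.convert_to = convert_to

    def __get__(self, instance, owner):
        if instance is None:
            return self
        value = instance._json_data.get(self.key)  # assumes the raw JSON is kept here
        if self.convert_to is not None and value is not None:
            value = self.convert_to(value)
        return value
```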
 | closed | 2015-04-26T18:25:40Z | 2018-03-22T02:49:10Z | https://github.com/sigmavirus24/github3.py/issues/376 | [
"Request For Comments",
"Needs More Info"
] | sigmavirus24 | 1 |
Python3WebSpider/ProxyPool | flask | 91 | redis.exceptions.RedisError: ZADD requires an equal number of values and scores | The Redis version is 3.4. Is this an environment issue? | closed | 2020-09-01T03:23:37Z | 2021-03-14T18:44:37Z | https://github.com/Python3WebSpider/ProxyPool/issues/91 | [] | qqliunian2001 | 1 |
iperov/DeepFaceLab | machine-learning | 5,564 | Effect of learning rate and batch size on training time | @iperov First of all thanks for this amazing repo.
I have been working with this project for a while, running my experiments on an A6000 GPU, which supports a maximum batch size of 64. I want to decrease the training time for RTM models. I did some experiments but the results are not that good. I had doubts regarding:
1. Can I increase the learning rate initially (like 8x) and decrease it by half every time I delete the inter_AB file (I delete inter_AB when my loss stops decreasing)?
2. Can I use multiple GPUs (2-4 ) to increase the batch size and assume that the iterations to delete inter_AB file will also decrease?
3. Is there any other way than the number of iterations to decide when to delete the inter_AB file?
4. Also I am having some wobbling issues with the face when there are extreme camera motions (like zooming or when the subject comes close to the camera). Does turning off Random Warp after the 4 deletes of inter_AB decrease this issue?
I am open for any suggestion to decrease the training time of a RTM model.
Thanks in advance.
| open | 2022-09-19T14:41:50Z | 2023-06-08T23:19:05Z | https://github.com/iperov/DeepFaceLab/issues/5564 | [] | Shivank1006 | 1 |
scikit-learn-contrib/metric-learn | scikit-learn | 140 | Fix some PEP8 errors | PR #139 may introduce some PEP8 erros, that we should fix. The goal here is not to fix all PEP8 errors of the package to avoid overwriting a lot the history, but just PEP8 errors from the merge. | closed | 2019-01-02T13:20:19Z | 2019-09-03T07:54:55Z | https://github.com/scikit-learn-contrib/metric-learn/issues/140 | [] | wdevazelhes | 1 |
blacklanternsecurity/bbot | automation | 2,257 | Consider using ansible-core instead of ansible to reduce install size significantly | Trying to include `bbot` in my Docker image, I noticed the primary size factor was the Ansible installation, a.k.a. the `ansible` package:


I'm not sure we need all the ansible collections like fortinet etc... so maybe we can shrink the overall size by dropping the collections we don't need.
I think we can switch to the `ansible-core` package.
Sure, 462MB is not that big but when you want a slim Docker image that can run in constrained environments like RPi, it can be... | open | 2025-02-07T09:52:38Z | 2025-02-10T20:21:09Z | https://github.com/blacklanternsecurity/bbot/issues/2257 | [
"enhancement"
] | ocervell | 0 |
robotframework/robotframework | automation | 4,663 | `BuiltIn.Log` documentation contains a defect | This is more of a typo than a bug, but in the very last line of the docstring for the `BuiltIn.Log` keyword, there is an annotation that one can choose between the `type` and `log` formatter options.
Looking at the `_get_formatter` method ([HERE](https://github.com/robotframework/robotframework/blob/572f1e412a14c125e2098ce79e37dd45c2c0f990/src/robot/libraries/BuiltIn.py#LL3019)), there is no such key in the dictionary. I believe that the latter option should be `len`.
Robot Framework version: 6.0.2

| closed | 2023-02-22T07:05:14Z | 2023-03-15T12:51:53Z | https://github.com/robotframework/robotframework/issues/4663 | [
"bug",
"priority: low",
"alpha 1",
"effort: small"
] | alagierx | 1 |
akanz1/klib | data-visualization | 243 | switch to UV for venv and dependency management | closed | 2024-07-28T13:17:41Z | 2024-10-12T12:49:58Z | https://github.com/akanz1/klib/issues/243 | [
"enhancement"
] | akanz1 | 0 | |
microsoft/nni | deep-learning | 5,648 | no module named nni.algorithms.compression | **Describe the issue**:
from nni.algorithms.compression.v2.pytorch import TorchEvaluator gives an import error: no module named nni.algorithms.compression
I have nni 999.dev0 version installed on my jetson orin
**Environment**:
- NNI version: 999.dev0
| open | 2023-07-20T15:51:02Z | 2023-08-11T06:57:02Z | https://github.com/microsoft/nni/issues/5648 | [] | ankitknitj | 2 |
seleniumbase/SeleniumBase | web-scraping | 2,570 | "PermissionError: [Errno 13] Permission denied" while trying to patch uc_driver | ```
Downloading chromedriver-win64.zip from:
https://storage.googleapis.com/chrome-for-testing-public/122.0.6261.94/win64/chromedriver-win64.zip ...
Download Complete!
Extracting ['chromedriver.exe'] from chromedriver-win64.zip ...
[ERROR] Error al guardar captura de pantalla
[ERROR] Error desconocido durante la ejecucion
- Traceback (most recent call last):
- File "C:\Cp\bot_garena\bot_garena\bot_garena.py", line 764, in bot_garena
- instance = next(iterator, None)
- ^^^^^^^^^^^^^^^^^^^^
- File "C:\Cp\bot_garena\bot_garena\bot_garena.py", line 686, in nav_code
- raise e
- File "C:\Cp\bot_garena\bot_garena\bot_garena.py", line 571, in nav_code
- driver_handle = WebDriver()
- ^^^^^^^^^^^
- File "C:\Cp\bot_garena\bot_garena\bot_garena.py", line 452, in __init__
- self.driver = Driver(**params_driver)
- ^^^^^^^^^^^^^^^^^^^^^^^
- File "C:\Python312\Lib\site-packages\seleniumbase\plugins\driver_manager.py", line 488, in Driver
- driver = browser_launcher.get_driver(
- ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- File "C:\Python312\Lib\site-packages\seleniumbase\core\browser_launcher.py", line 1612, in get_driver
- return get_local_driver(
- ^^^^^^^^^^^^^^^^^
- File "C:\Python312\Lib\site-packages\seleniumbase\core\browser_launcher.py", line 3510, in get_local_driver
- driver = undetected.Chrome(
- ^^^^^^^^^^^^^^^^^^
- File "C:\Python312\Lib\site-packages\seleniumbase\undetected\__init__.py", line 130, in __init__
- self.patcher.auto()
- File "C:\Python312\Lib\site-packages\seleniumbase\undetected\patcher.py", line 87, in auto
- return self.patch_exe()
- ^^^^^^^^^^^^^^^^
- File "C:\Python312\Lib\site-packages\seleniumbase\undetected\patcher.py", line 211, in patch_exe
- with io.open(self.executable_path, "r+b") as fh:
- ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- PermissionError: [Errno 13] Permission denied: 'C:\\Python312\\Lib\\site-packages\\seleniumbase\\drivers\\uc_driver.exe'
``` | closed | 2024-03-06T21:39:30Z | 2024-03-06T22:50:14Z | https://github.com/seleniumbase/SeleniumBase/issues/2570 | [
"duplicate",
"UC Mode / CDP Mode"
] | boludoz | 5 |
NullArray/AutoSploit | automation | 374 | Unhandled Exception (4557004b7) | Autosploit version: `3.0`
OS information: `Linux-4.19.0-kali1-amd64-x86_64-with-Kali-kali-rolling-kali-rolling`
Running context: `autosploit.py`
Error meesage: `global name 'Except' is not defined`
Error traceback:
```
Traceback (most recent call):
File "/root/Zof/Tools/AutoSploit/autosploit/main.py", line 113, in main
loaded_exploits = load_exploits(EXPLOIT_FILES_PATH)
File "/root/Zof/Tools/AutoSploit/lib/jsonize.py", line 61, in load_exploits
except Except:
NameError: global name 'Except' is not defined
```
Metasploit launched: `False`
| closed | 2019-01-18T10:22:17Z | 2019-04-18T17:32:00Z | https://github.com/NullArray/AutoSploit/issues/374 | [] | AutosploitReporter | 0 |
OpenGeoscience/geonotebook | jupyter | 150 | Displaying Vector Data | This isn't really an issue, but how would I display a series of X,Y coordinates of points? Would it be possible to add a pop up window to the display?
The docs go into depth about displaying raster data but don't really cover vector data:
```python
from geonotebook.wrappers import RasterData
rd = RasterData('file:///path/to/file.tiff')
M.add_layer(rd[1, 2, 3], opacity=1.0, gamma=2.5)
```
| open | 2017-10-12T14:29:53Z | 2017-11-01T23:47:43Z | https://github.com/OpenGeoscience/geonotebook/issues/150 | [] | ghost | 11 |
JaidedAI/EasyOCR | machine-learning | 1,106 | My_first_lang is only compatible with English??? | --------mycode:
reader = Reader(['en'],recog_network='my_first_lang',model_storage_directory=basepath+'/model',user_network_directory=basepath+'/user_network');
--------file:

| open | 2023-08-07T13:26:50Z | 2023-09-25T22:12:50Z | https://github.com/JaidedAI/EasyOCR/issues/1106 | [] | xiayuer0114 | 1 |
kornia/kornia | computer-vision | 2,250 | Add calibration from vanishing points | ## 🚀 Feature
Now, when we have SOLD2 detector, we can play with lines. What would be required to get VP camera calibration?
1) VP-RANSAC. Or we can accept 3rd party input for the beginning
2) estimation of camera intrinsics from 3 vanishing points (straightforward)
Example of code: https://github.com/towardsautonomy/cam_intrinsic_calibration_single_image
https://github.com/ucuapps/single-view-autocalib
Reference: http://www.bmva.org/bmvc/1999/papers/38.pdf
https://www.cs.princeton.edu/courses/archive/fall13/cos429/lectures/11-epipolar
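For step 2, a rough NumPy sketch (not Kornia API; it assumes three mutually orthogonal vanishing points in pixel coordinates, zero skew and square pixels), following the classic orthocenter construction from the references above:
```python
import numpy as np

def intrinsics_from_vps(v1, v2, v3):
    (x1, y1), (x2, y2), (x3, y3) = v1, v2, v3
    # Principal point = orthocenter of the triangle formed by the vanishing points.
    A = np.array([[x2 - x3, y2 - y3],
                  [x1 - x3, y1 - y3]], dtype=float)
    b = np.array([x1 * (x2 - x3) + y1 * (y2 - y3),
                  x2 * (x1 - x3) + y2 * (y1 - y3)], dtype=float)
    u0, v0 = np.linalg.solve(A, b)
    # Focal length from any orthogonal pair: (v1 - p) . (v2 - p) + f^2 = 0.
    f = np.sqrt(max(0.0, -((x1 - u0) * (x2 - u0) + (y1 - v0) * (y2 - v0))))
    return np.array([[f, 0.0, u0], [0.0, f, v0], [0.0, 0.0, 1.0]])
```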
| open | 2023-03-02T14:45:34Z | 2023-03-02T14:51:22Z | https://github.com/kornia/kornia/issues/2250 | [
"help wanted"
] | ducha-aiki | 0 |
errbotio/errbot | automation | 1,092 | Hipchat backend frequently hits the rate-limit | In order to let us help you better, please fill out the following fields as best you can:
### I am...
* [x] Reporting a bug
* [ ] Suggesting a new feature
* [ ] Requesting help with running my bot
* [ ] Requesting help writing plugins
* [ ] Here about something else
### I am running...
* Errbot version: 5.1.2
* OS version: CentOS 7.3
* Python version: 3.6
* Using a virtual environment: yes
### Issue description
When using the HipChat backend, errbot quickly hits the API rate limit when joining a room
### Steps to reproduce
Configure errbot to connect to a room in HipChat
### Additional info
Here are some logs indicative of the "flood" of API requests causing errbot to be rate-limited and eventually disconnected. They aren't running quite exactly back to back... but definitely frequently enough to hit the HipChat API limit (which a quick Google indicates might be as low as 100 requests per 5 minutes).
```
2017-08-28 17:34:48,587 DEBUG urllib3.connectionpool https://api.hipchat.com:443 "GET /v2/user?include-guests=true HTTP/1.1" 200 3472
2017-08-28 17:34:48,740 DEBUG urllib3.connectionpool https://api.hipchat.com:443 "GET /v2/user?include-guests=true&start-index=100&max-results=100 HTTP/1.1" 200 3810
2017-08-28 17:34:48,881 DEBUG urllib3.connectionpool https://api.hipchat.com:443 "GET /v2/user?include-guests=true&start-index=200&max-results=100 HTTP/1.1" 200 1021
2017-08-28 17:34:48,985 DEBUG urllib3.connectionpool https://api.hipchat.com:443 "GET /v2/user/USER_NUMBER HTTP/1.1" 200 485
```
It looks like errbot might not be caching properly?
| closed | 2017-08-28T22:17:30Z | 2019-01-05T16:31:53Z | https://github.com/errbotio/errbot/issues/1092 | [] | cryptk | 6 |
flasgger/flasgger | flask | 357 | Data Validation not taking place with uiversion 3 and openAPI 3.0.1 | I'm working on simultaneously refactoring and converting an old project from Swagger 2 to the new OpenAPI 3. The swagger doc renders fine, but if I attempt to make any requests with intentionally malformed content, the validation errors are no longer thrown and all requests seem to go through successfully. It seems that with this module's OpenAPI 3 implementation nothing is validated against the swagger spec.
Below is a bare-bones version of my code
```python3
from flasgger import Swagger, LazyString, LazyJSONEncoder
from flask import Flask, url_for
from flask_restful import Api, Resource
app = Flask(__name__)
api = Api(app)
app.json_encoder = LazyJSONEncoder
app.config['SWAGGER'] = {
    'title': 'RencomAPI',
    'uiversion': 3,
    'openapi': '3.0.1',
    'favicon': LazyString(lambda: url_for('static', filename='logo.png')),
    'swagger_ui_css': LazyString(lambda: url_for('static', filename='swagger-ui.css')),
    'specs_route': '/'
}
swagger = Swagger(app, template_file='static/Swagger.yml', parse=True)
class NewProduct(Resource):
    # Create a new product
    def post(self):
        pass
api.add_resource(NewProduct, '/product')
if __name__ == "__main__":
    app.run(debug=True)
```
And my Swagger spec file
```yaml
openapi: 3.0.1
info:
  title: Proof-of-Concept
servers:
- url: http://127.0.0.1:5000/
- url: https://127.0.0.1:5000/
paths:
  /product:
    post:
      tags:
      - Product
      summary: Add a new product
      description: Create new product listings.
      operationId: addProduct
      requestBody:
        description: Product object to be added to your catalog
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/RawItem'
        required: true
      responses:
        200:
          description: Product created
          content: {}
        405:
          description: Invalid input
          content: {}
        501:
          description: Not Yet Implemented
          content: {}
      x-codegen-request-body-name: body
components:
  schemas:
    RawItem:
      type: object
      required:
      - upc
      properties:
        id:
          type: integer
          format: int32
        itemNumber:
          type: string
          description: Your unique code pertaining only to this product
          example: 1006-10
        upc:
          type: integer
          description: Universal Product Code
          format: int32
``` | closed | 2020-01-22T21:47:37Z | 2021-11-23T10:58:05Z | https://github.com/flasgger/flasgger/issues/357 | [] | caffeinatedMike | 2 |
microsoft/unilm | nlp | 1,157 | BEIT-2: why use ViT-B instead of ViT-L | **Describe**
Model I am using: BEIT-2
I read in the paper that all versions of the tokenizer use ViT-B. I'm curious whether you have tried training with ViT-L? Is it possible to get a better tokenizer? | closed | 2023-06-27T09:43:45Z | 2023-06-27T13:25:14Z | https://github.com/microsoft/unilm/issues/1157 | [] | yzy-thu | 2 |
kymatio/kymatio | numpy | 301 | I want to use the wavelet scattering to fusion image, but I don't know how can I inverse transformation after fusion | I want to use wavelet scattering for image fusion, but I don't know how to invert the transformation after fusion. @ | closed | 2019-01-16T06:38:02Z | 2019-01-22T21:37:34Z | https://github.com/kymatio/kymatio/issues/301 | [
"question"
] | strawberry1996 | 15 |
JaidedAI/EasyOCR | deep-learning | 1,378 | RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! | Filtering the images containing characters which are not in opt.character
Filtering the images whose label is longer than opt.batch_max_length
--------------------------------------------------------------------------------
dataset_root: all_data
opt.select_data: ['all_data']
opt.batch_ratio: ['1']
--------------------------------------------------------------------------------
dataset_root: all_data dataset: all_data
all_data/en_sample
sub-directory: /en_sample num samples: 882
all_data/rec\test
sub-directory: /rec\test num samples: 0
all_data/rec\train
sub-directory: /rec\train num samples: 0
all_data/rec\val
sub-directory: /rec\val num samples: 0
num total samples of all_data: 882 x 1.0 (total_data_usage_ratio) = 882
num samples of all_data per batch: 10 x 1.0 (batch_ratio) = 10
--------------------------------------------------------------------------------
Total_batch_size: 10 = 10
--------------------------------------------------------------------------------
dataset_root: all_data/en_sample dataset: /
all_data/en_sample/
sub-directory: /. num samples: 882
--------------------------------------------------------------------------------
...
---------------------------------------
continue to train, start_iter: 300000
training time: 11.559250354766846
Output is truncated.
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[6], [line 2](vscode-notebook-cell:?execution_count=6&line=2)
[1](vscode-notebook-cell:?execution_count=6&line=1) opt = get_config("config_files/en_fine_tunning_config.yaml")
----> [2](vscode-notebook-cell:?execution_count=6&line=2) train(opt, amp=False)
File c:\Users\mengfoong\Desktop\Train_Docling_2\EasyOCR-Trainer\train.py:233, in train(opt, show_number, amp)
[230](file:///C:/Users/mengfoong/Desktop/Train_Docling_2/EasyOCR-Trainer/train.py:230) model.eval()
[231](file:///C:/Users/mengfoong/Desktop/Train_Docling_2/EasyOCR-Trainer/train.py:231) with torch.no_grad():
[232](file:///C:/Users/mengfoong/Desktop/Train_Docling_2/EasyOCR-Trainer/train.py:232) valid_loss, current_accuracy, current_norm_ED, preds, confidence_score, labels,\
--> [233](file:///C:/Users/mengfoong/Desktop/Train_Docling_2/EasyOCR-Trainer/train.py:233) infer_time, length_of_data = validation(model, criterion, valid_loader, converter, opt, device)
[234](file:///C:/Users/mengfoong/Desktop/Train_Docling_2/EasyOCR-Trainer/train.py:234) model.train()
[235](file:///C:/Users/mengfoong/Desktop/Train_Docling_2/EasyOCR-Trainer/train.py:235) print(infer_time, length_of_data)
File c:\Users\mengfoong\Desktop\Train_Docling_2\EasyOCR-Trainer\test_1.py:45, in validation(model, criterion, evaluation_loader, converter, opt, device)
[43](file:///C:/Users/mengfoong/Desktop/Train_Docling_2/EasyOCR-Trainer/test_1.py:43) preds_size = torch.IntTensor([preds.size(1)] * batch_size)
[44](file:///C:/Users/mengfoong/Desktop/Train_Docling_2/EasyOCR-Trainer/test_1.py:44) # permute 'preds' to use CTCloss format
---> [45](file:///C:/Users/mengfoong/Desktop/Train_Docling_2/EasyOCR-Trainer/test_1.py:45) cost = criterion(preds.log_softmax(2).permute(1, 0, 2), text_for_loss, preds_size, length_for_loss)
[47](file:///C:/Users/mengfoong/Desktop/Train_Docling_2/EasyOCR-Trainer/test_1.py:47) if opt.decode == 'greedy':
[48](file:///C:/Users/mengfoong/Desktop/Train_Docling_2/EasyOCR-Trainer/test_1.py:48) # Select max probabilty (greedy decoding) then decode index to character
[49](file:///C:/Users/mengfoong/Desktop/Train_Docling_2/EasyOCR-Trainer/test_1.py:49) _, preds_index = preds.max(2)
File c:\Users\mengfoong\Desktop\Train_Docling_2\venv\Lib\site-packages\torch\nn\modules\module.py:1739, in Module._wrapped_call_impl(self, *args, **kwargs)
[1737](file:///C:/Users/mengfoong/Desktop/Train_Docling_2/venv/Lib/site-packages/torch/nn/modules/module.py:1737) return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
[1738](file:///C:/Users/mengfoong/Desktop/Train_Docling_2/venv/Lib/site-packages/torch/nn/modules/module.py:1738) else:
...
[3085](file:///C:/Users/mengfoong/Desktop/Train_Docling_2/venv/Lib/site-packages/torch/nn/functional.py:3085) _Reduction.get_enum(reduction),
[3086](file:///C:/Users/mengfoong/Desktop/Train_Docling_2/venv/Lib/site-packages/torch/nn/functional.py:3086) zero_infinity,
[3087](file:///C:/Users/mengfoong/Desktop/Train_Docling_2/venv/Lib/site-packages/torch/nn/functional.py:3087) )
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and CPU!
I'm facing this error when I run the cell. Can anyone share how you resolved this?
| open | 2025-02-19T10:06:04Z | 2025-02-19T10:06:04Z | https://github.com/JaidedAI/EasyOCR/issues/1378 | [] | MengFoong | 0 |
marshmallow-code/marshmallow-sqlalchemy | sqlalchemy | 98 | Add ability to disable relationships | The include_fk option is really nice when producing flat serializations (non-nested). However, there's the issue of the foreign keys showing up twice: once as the proper foreign key name, and once as the relationship from the sqlalchemy model.
Please add either another option to exclude relationships, or have the include_fk option automatically get rid of simple relationship attributes that just wrap a foreign key. | closed | 2016-11-02T18:30:23Z | 2021-06-13T23:43:38Z | https://github.com/marshmallow-code/marshmallow-sqlalchemy/issues/98 | [] | dusktreader | 11 |
biolab/orange3 | data-visualization | 6,023 | LSOpenURLsWithRole() failed with error -10810 for the file /Applications/Orange3.app | **What's wrong?**
```
$ open /Applications/Orange3.app
LSOpenURLsWithRole() failed with error -10810 for the file /Applications/Orange3.app.
```
**How can we reproduce the problem?**
Download Orange3-3.32.0-Python3.8.8.dmg from https://orangedatamining.com/download/#macos
Double-click the downloaded *.dmg file, then put it in /Applications.
Open a terminal, then execute
`open /Applications/Orange3.app`
If you double-click /Applications/Orange3.app from Finder, nothing happens.
**What's your environment?**
- Operating system:
```
$ system_profiler SPSoftwareDataType
Software:
System Software Overview:
System Version: macOS 10.12.6 (16G2136)
Kernel Version: Darwin 16.7.0
```
- Orange version:
Orange3-3.32.0-Python3.8.8.dmg
- How you installed Orange:
From *.dmg file above then put the Orange3.app to /Applications/
| closed | 2022-06-14T01:29:10Z | 2022-10-05T08:22:11Z | https://github.com/biolab/orange3/issues/6023 | [
"bug report"
] | sugizo | 4 |
huggingface/datasets | numpy | 6,554 | Parquet exports are used even if revision is passed | We should not use Parquet exports if `revision` is passed.
I think this is a regression. | closed | 2024-01-03T11:32:26Z | 2024-02-02T10:35:29Z | https://github.com/huggingface/datasets/issues/6554 | [
"bug"
] | albertvillanova | 1 |
tqdm/tqdm | pandas | 1,468 | When using pdsh, the tqdm progress bar cannot be displayed normally | - [x] I have marked all applicable categories:
+ [x] exception-raising bug
+ [x] visual output bug
- [x] I have visited the [source website], and in particular
read the [known issues]
- [x] I have searched through the [issue tracker] for duplicates
- [x] I have mentioned version numbers, operating system and
environment, where applicable:
```python
import tqdm, sys
print(tqdm.__version__, sys.version, sys.platform)
```
4.65.0 3.9.16 (main, Mar 8 2023, 14:00:05)
[GCC 11.2.0] linux
[source website]: https://github.com/tqdm/tqdm/
[known issues]: https://github.com/tqdm/tqdm/#faq-and-known-issues
[issue tracker]: https://github.com/tqdm/tqdm/issues?q=
| open | 2023-04-16T04:44:45Z | 2023-04-16T04:45:30Z | https://github.com/tqdm/tqdm/issues/1468 | [] | yrqUni | 0 |
benbusby/whoogle-search | flask | 1,188 | [QUESTION] Docker problems after recent update. | I am sure that this is likely something that I have caused, but I noticed this problem started happening after the 9.0 update release under docker. I have re-downloaded the docker-compose-traefik.yaml and and env files and added settings to make letsencrypt use dns challenge. It worked flawlessly until 9.0 and now it gives me a 404 error. What happened?
I have completely deleted the previous containers and stored data and recreated everything again and still have the same issue, can someone look over my config and logs and see what is going on?
[logs.txt](https://github.com/user-attachments/files/17405453/logs.txt)
[whoogle.env.txt](https://github.com/user-attachments/files/17405456/whoogle.txt)
[docker-compose.yaml.txt](https://github.com/user-attachments/files/17405589/docker-compose.txt)
| open | 2024-10-17T03:06:43Z | 2024-10-17T03:22:31Z | https://github.com/benbusby/whoogle-search/issues/1188 | [
"question"
] | MaverickTN | 0 |
serpapi/google-search-results-python | web-scraping | 15 | {'error':'We couldn't find your API Key.'} |
```python
from serpapi.google_search_results import GoogleSearchResults
client = GoogleSearchResults({"q": "coffee", "serp_api_key": "************************"})
result = client.get_dict()
```
I tried giving my API key from serpstack, yet I am left with this error. Any help would be much appreciated. | closed | 2020-04-20T18:45:05Z | 2020-06-30T13:46:52Z | https://github.com/serpapi/google-search-results-python/issues/15 | [] | rokintech | 2 |
sqlalchemy/sqlalchemy | sqlalchemy | 10,969 | Add type annotation to the Unicode class. | ### Ensure stubs packages are not installed
- [X] No sqlalchemy stub packages is installed (both `sqlalchemy-stubs` and `sqlalchemy2-stubs` are not compatible with v2)
### Verify if the api is typed
- [X] The api is not in a module listed in [#6810](https://github.com/sqlalchemy/sqlalchemy/issues/6810) so it should pass type checking
### Describe the typing issue
As per discussion & comment https://github.com/sqlalchemy/sqlalchemy/discussions/10950#discussioncomment-8368380 the type annotations for `Unicode` are missing.
### To Reproduce
```python
import sqlalchemy as sa
class Base(sa.orm.DeclarativeBase):
    pass
class Table(Base):
    name: sa.orm.Mapped[str] = sa.orm.mapped_column(sa.Unicode(), primary_key=True)
```
### Error
```
unicode.py:9:53: error: Call to untyped function "Unicode" in typed context [no-untyped-call]
name: sa.orm.Mapped[str] = sa.orm.mapped_column(sa.Unicode(), primary_key=True)
^~~~~~~~~~~~
```
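A possible local workaround until the annotation is added upstream is to silence the checker on that one line (sketch of the same repro line):
```python
name: sa.orm.Mapped[str] = sa.orm.mapped_column(sa.Unicode(), primary_key=True)  # type: ignore[no-untyped-call]
```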
### Versions
- OS: macOS Ventura 13.6.4, Darwin Kernel Version 22.6.0
- Python: Python 3.10.13
- SQLAlchemy: 2.0.25
- Type checker: mypy 1.8.0 (compiled: yes)
### Additional context
_No response_ | closed | 2024-02-06T02:28:11Z | 2024-02-07T06:39:55Z | https://github.com/sqlalchemy/sqlalchemy/issues/10969 | [
"PRs (with tests!) welcome",
"typing"
] | jenstroeger | 2 |
cvat-ai/cvat | tensorflow | 8,400 | Attribute-annotation, shortcut key 9 not working anymore | ### Actions before raising this issue
- [X] I searched the existing issues and did not find anything similar.
- [X] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Steps to Reproduce
Using the attribute annotation mode with a lot of attributes (e.g. using keys 0-9), updating the attribute works perfectly fine for keys 0-8, but using key 9 will not update the attribute.
### Expected Behavior
_No response_
### Possible Solution
_No response_
### Context
_No response_
### Environment
```Markdown
Server version: 2.17.0
Core version: 15.1.2
Canvas version: 2.20.9
UI version: 1.65.0
```
| closed | 2024-09-04T19:49:38Z | 2024-09-09T08:30:55Z | https://github.com/cvat-ai/cvat/issues/8400 | [
"bug"
] | Casper-lu | 5 |
pallets/flask | flask | 5,381 | Session data is not untagged properly when using other JSON providers | Providers such as `orjson` and `ujson` do not implement `object_hook`. The "tagged JSON" scheme used to encode types for session data currently calls `loads(data, object_hook=...)`, so providers that ignore that option return the data still tagged. Untagging needs to be implemented without using `object_hook`. | closed | 2024-01-15T15:38:32Z | 2024-01-30T00:05:37Z | https://github.com/pallets/flask/issues/5381 | [] | davidism | 0 |
wkentaro/labelme | computer-vision | 1,246 | Unable to use labelme_json_to_dataset | ### Provide environment information
python3.11
pip list
labelme 5.1.1
labelme 5.1.1
lazy_loader 0.1
matplotlib 3.7.1
natsort 8.3.1
networkx 3.0
numpy 1.24.2
opencv-python 4.7.0.72
packaging 23.0
Pillow 9.4.0
pip 23.0.1
pluggy 1.0.0
pygame 2.3.0
pyparsing 3.0.9
PyQt5 5.15.9
PyQt5-Qt5 5.15.2
PyQt5-sip 12.11.1
### What OS are you using?
windows 10
### Describe the Bug
The following error occurs when using json_to_dataset:
This script is aimed to demonstrate how to convert the JSON file to a single image dataset.
It won't handle multiple JSON files to generate a real-use dataset.
Traceback (most recent call last):
File "C:\Users\86158\AppData\Local\Programs\Python\Python311\Scripts\labelme_json_to_dataset-script.py", line 33, in <module>
sys.exit(load_entry_point('labelme==5.1.1', 'console_scripts', 'labelme_json_to_dataset')())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\86158\AppData\Local\Programs\Python\Python311\Lib\site-packages\labelme-5.1.1-py3.11.egg\labelme\cli\json_to_dataset.py", line 39, in main
data = json.load(open(json_file))
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\86158\AppData\Local\Programs\Python\Python311\Lib\json\__init__.py", line 293, in load
return loads(fp.read(),
^^^^^^^^^
UnicodeDecodeError: 'gbk' codec can't decode byte 0xaa in position 5714: illegal multibyte sequence
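The traceback suggests the JSON file is being opened with the Windows default codec (gbk). A likely local workaround is to pass an explicit encoding where the file is read (sketch only; the path is a placeholder):
```python
import json

json_file = "example.json"  # hypothetical path
with open(json_file, encoding="utf-8") as f:  # explicit encoding instead of the gbk default
    data = json.load(f)
```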
### Expected Behavior
Conversion should work normally.
### To Reproduce
_No response_ | open | 2023-03-18T05:33:22Z | 2023-03-18T05:33:22Z | https://github.com/wkentaro/labelme/issues/1246 | [
"issue::bug"
] | wwmwmwm | 0 |
flairNLP/flair | pytorch | 3,513 | [Bug]: Can't load older models using Byte-Pair embeddings since flair 0.14 | ### Describe the bug
This commit https://github.com/flairNLP/flair/commit/f1a4d963f8b6851dbfb7397495f015e9363738b6 causes an error when trying to load a model using byte-pair embeddings trained with older flair versions.
### To Reproduce
```python
from flair.models import SequenceTagger
model = SequenceTagger.load("path/to/your/model") # model trained using byte-pair embeddings
```
### Expected behavior
Model should load properly
### Logs and Stack traces
```stacktrace
`---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[44], line 1
----> 1 model2 = SequenceTagger.load("new_categories_model.pt")
File ~\AppData\Local\anaconda3\lib\site-packages\flair\models\sequence_tagger_model.py:925, in SequenceTagger.load(cls, model_path)
921 @classmethod
922 def load(cls, model_path: Union[str, Path, Dict[str, Any]]) -> "SequenceTagger":
923 from typing import cast
--> 925 return cast("SequenceTagger", super().load(model_path=model_path))
File ~\AppData\Local\anaconda3\lib\site-packages\flair\nn\model.py:564, in Classifier.load(cls, model_path)
560 @classmethod
561 def load(cls, model_path: Union[str, Path, Dict[str, Any]]) -> "Classifier":
562 from typing import cast
--> 564 return cast("Classifier", super().load(model_path=model_path))
File ~\AppData\Local\anaconda3\lib\site-packages\flair\nn\model.py:190, in Model.load(cls, model_path)
188 if not isinstance(model_path, dict):
189 model_file = cls._fetch_model(str(model_path))
--> 190 state = load_torch_state(model_file)
191 else:
192 state = model_path
File ~\AppData\Local\anaconda3\lib\site-packages\flair\file_utils.py:384, in load_torch_state(model_file)
380 # load_big_file is a workaround byhttps://github.com/highway11git
381 # to load models on some Mac/Windows setups
382 # see https://github.com/zalandoresearch/flair/issues/351
383 f = load_big_file(model_file)
--> 384 return torch.load(f, map_location="cpu")
File ~\AppData\Local\anaconda3\lib\site-packages\torch\serialization.py:1026, in load(f, map_location, pickle_module, weights_only, mmap, **pickle_load_args)
1024 except RuntimeError as e:
1025 raise pickle.UnpicklingError(UNSAFE_MESSAGE + str(e)) from None
-> 1026 return _load(opened_zipfile,
1027 map_location,
1028 pickle_module,
1029 overall_storage=overall_storage,
1030 **pickle_load_args)
1031 if mmap:
1032 raise RuntimeError("mmap can only be used with files saved with "
1033 "`torch.save(_use_new_zipfile_serialization=True), "
1034 "please torch.save your checkpoint with this option in order to use mmap.")
File ~\AppData\Local\anaconda3\lib\site-packages\torch\serialization.py:1438, in _load(zip_file, map_location, pickle_module, pickle_file, overall_storage, **pickle_load_args)
1436 unpickler = UnpicklerWrapper(data_file, **pickle_load_args)
1437 unpickler.persistent_load = persistent_load
-> 1438 result = unpickler.load()
1440 torch._utils._validate_loaded_sparse_tensors()
1441 torch._C._log_api_usage_metadata(
1442 "torch.load.metadata", {"serialization_id": zip_file.serialization_id()}
1443 )
File ~\AppData\Local\anaconda3\lib\site-packages\torch\serialization.py:1431, in _load.<locals>.UnpicklerWrapper.find_class(self, mod_name, name)
1429 pass
1430 mod_name = load_module_mapping.get(mod_name, mod_name)
-> 1431 return super().find_class(mod_name, name)
AttributeError: Can't get attribute 'BPEmbSerializable' on <module 'flair.embeddings.token' from 'C:\\Users\\amaury.fouret\\AppData\\Local\\anaconda3\\lib\\site-packages\\flair\\embeddings\\token.py'>`
```
### Screenshots
_No response_
### Additional Context
Maybe we could start adding the flair version used to train a model to the model's metadata, to make this kind of issue easier to fix in the future?
### Environment
#### Versions:
##### Flair
0.14.0
##### Pytorch
2.2.0+cpu
##### Transformers
4.37.2
#### GPU
False | closed | 2024-07-26T10:08:41Z | 2024-07-30T09:44:10Z | https://github.com/flairNLP/flair/issues/3513 | [
"bug"
] | mauryaland | 1 |
python-security/pyt | flask | 68 | Add pre-commit hooks, whatever you like | closed | 2017-11-12T03:42:12Z | 2018-03-23T17:38:14Z | https://github.com/python-security/pyt/issues/68 | [
"enhancement",
"good first issue"
] | KevinHock | 8 | |
marcomusy/vedo | numpy | 982 | how to adjust the font in legend? |
I add a legend to my window, and the code is:
```
planes = []
for p in ps:
    planes.append(vedo.Plane(...).legend(...))
lb = vedo.LegendBox(planes, font='Bongas')
vedo.show(planes, lb)
```
And the result is:

The font in the legend is too small to see. How can I make the font larger?
Thanks in advance~~~
| open | 2023-11-22T03:47:10Z | 2023-11-22T13:29:30Z | https://github.com/marcomusy/vedo/issues/982 | [] | zhang-qiang-github | 1 |
seleniumbase/SeleniumBase | web-scraping | 3,547 | Example not working with custom URLs | I am trying to apply the example https://github.com/seleniumbase/SeleniumBase/issues/3535#issuecomment-2667506520 by slightly changing the code to sequentially analyze a list of specific URLs.
This works:
```
from seleniumbase import SB
urls = {
"https://www.repubblica.it/": "h2.entry__title",
"https://www.corriere.it/" : "h4.title-art-hp",
}
with SB(uc=True, incognito=True) as sb:
sb.activate_cdp_mode()
sb.sleep(1)
for url, tag in urls.items():
sb.cdp.open(url)
sb.sleep(1)
print(sb.get_current_url())
print(sb.cdp.assert_element(tag, timeout=20))
```
I mean, it prints the existence of `tag` for each proposed URL.
This doesn't:
```
from seleniumbase import SB
urls = [
"https://ieeexplore.ieee.org/xpl/issues?punumber=4609443&isnumber=10766875",
"https://ieeexplore.ieee.org/xpl/issues?punumber=6287639&isnumber=10380310",
]
with SB(uc=True, incognito=True) as sb:
sb.activate_cdp_mode()
sb.sleep(1)
for url in urls:
sb.cdp.open(url)
sb.sleep(1)
print(sb.get_current_url())
print(sb.cdp.assert_element("div.issue-details-past-tabs.year", timeout=20))
```
I mean, it prints the existence of the tag only the first time, while the second iteration results in an `Exception: Element {div.issue-details-past-tabs.year} was not found!` error, even though the URL is correctly loaded in the browser.
I don't understand why. | closed | 2025-02-20T15:08:40Z | 2025-02-20T17:24:52Z | https://github.com/seleniumbase/SeleniumBase/issues/3547 | [
"invalid usage",
"UC Mode / CDP Mode"
] | matteocacciola | 1 |
jupyter-book/jupyter-book | jupyter | 1,715 | Remove any sidebar content from the generated PDF | ### Describe the bug
**context**
When I create a standalone PDF, I unfortunately still see the sidebar on the left. I used some of the CSS recommended in other issues to get rid of it, but had no success.

**expectation**
I expected the three buttons on the right to be gone. They are also not functional.
```console
$ jupyter-book build jupyter-book/ --builder pdfhtml
```
### Reproduce the bug
1. `git clone https://github.com/theislab/extended-single-cell-best-practices`
2. `cd extended-single-cell-best-practices`
3. `make pdf`
### List your environment
1. Python 3.9
2. Arch Linux
3. jupyter-book 0.12.3 | open | 2022-04-27T13:08:07Z | 2022-04-29T03:34:46Z | https://github.com/jupyter-book/jupyter-book/issues/1715 | [
"bug",
":label: pdfhtml"
] | Zethson | 1 |
piccolo-orm/piccolo | fastapi | 171 | Insert & get | Is there any way to do an INSERT and a subsequent SELECT of the inserted object without a second SELECT query? It looks like INSERT returns the PK, but there's no way to add additional columns, change the cardinality, etc. | closed | 2021-08-20T05:40:38Z | 2022-07-18T22:48:58Z | https://github.com/piccolo-orm/piccolo/issues/171 | [
"enhancement"
] | adriangb | 2 |
sqlalchemy/sqlalchemy | sqlalchemy | 11,811 | Float, Numeric primary key column has autoincrement set to true | ### Discussed in https://github.com/sqlalchemy/sqlalchemy/discussions/11810
<div type='discussions-op-text'>
<sup>Originally posted by **bgiebl** August 30, 2024</sup>
## Summary
sqlalchemy compiles a primary key FLOAT column to SERIAL when using a Postgres database.
This behaviour happens only when there are no foreign key relationships on the column defined and there is no database default defined.
I think this behaviour is a bug and I can propose a fix for it, but I am posting it here to be sure this behaviour is not intended.
## Recreating the problem
```
from sqlalchemy import create_engine, Table, Column, String, MetaData, Float
from sqlalchemy.schema import CreateTable
engine = create_engine("postgresql://scott:tiger@127.0.0.1:5432/test")
metadata = MetaData()
user_table = Table(
    'user', metadata,
    Column('id', Float, primary_key=True),
    Column('name', String)
)
sql = str(CreateTable(user_table).compile(engine))
print(sql)
```
### Output
```
CREATE TABLE "user" (
id SERIAL NOT NULL,
name VARCHAR,
PRIMARY KEY (id)
)
```
### Expected Behaviour
```
CREATE TABLE "user" (
id FLOAT NOT NULL,
name VARCHAR,
PRIMARY KEY (id)
)
```
## Proposed fix
The problem seems to happen with postgres since in this line
https://github.com/sqlalchemy/sqlalchemy/blob/22cbc7dcb48c946dda66704797665289965eb22e/lib/sqlalchemy/dialects/postgresql/base.py#L2164
it is checked whether the primary key column is also an autoincrement column.
So I think it is not a problem with postgres but rather with `_autoincrement_column`.
In the documentation https://docs.sqlalchemy.org/en/20/core/metadata.html#sqlalchemy.schema.Column.params.autoincrement it is written
> Set up “auto increment” semantics for an integer primary key column with no foreign key dependencies
which indicates that this should only happen with INTEGER and not FLOAT datatypes.
So why is a FLOAT column getting interpreted as an autoincrement column?
It can be traced back to `PrimaryKeyConstraint._autoincrement_column`
https://github.com/sqlalchemy/sqlalchemy/blob/22cbc7dcb48c946dda66704797665289965eb22e/lib/sqlalchemy/sql/schema.py#L5092-L5097
where it is checked whether the data type of the column has no type affinity to INTEGERTYPE and **NUMERICTYPE**.
FLOAT has a type affinity to NUMERICTYPE, so the whole check evaluates to False.
I know that there are some SQL dialects which support autoincrement on FLOAT columns but this is not the case for Postgres.
So depending on the source of the problem there are two possible fixes:
1. As defined in the documentation, autoincrement can only be applied to INTEGER columns:
The fix would be to delete line 5096 in schema.py.
2. Autoincrement can be applied to FLOAT columns in some dialects but not in Postgres:
The fix would be to add an additional type check in `PGDDLCompiler.get_column_specification`
for a type affinity of the column to INTEGERTYPE.
If you agree that this is unintended behaviour and a bug, I can provide a pull request with one of the fixes.
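Independently of which fix is chosen, a user-side workaround that already produces the expected DDL is to disable autoincrement explicitly (sketch of the same table as above):
```python
from sqlalchemy import Column, Float, MetaData, String, Table

metadata = MetaData()
user_table = Table(
    "user", metadata,
    Column("id", Float, primary_key=True, autoincrement=False),  # keeps FLOAT instead of SERIAL
    Column("name", String),
)
```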
</div> | closed | 2024-08-30T16:15:14Z | 2024-11-12T18:46:18Z | https://github.com/sqlalchemy/sqlalchemy/issues/11811 | [
"bug",
"schema",
"datatypes"
] | CaselIT | 7 |
pallets/quart | asyncio | 203 | send_file taking bytes to send as-is | Simple enhancement request so one does not need to wrap in `io.BytesIO()`... (_as I keep forgetting_)
As per https://quart.palletsprojects.com/en/latest/reference/source/quart.helpers.html#quart.helpers.send_file, the current acceptance of `bytes` as a valid type for `filename_io` is misleading, as a filename would always be a string, no?
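For context, a minimal sketch of what the extra wrapping looks like today (the route name and payload are illustrative):
```python
import io

from quart import Quart, send_file

app = Quart(__name__)

@app.route("/report")
async def report():
    payload = b"bytes produced elsewhere"  # illustrative payload
    return await send_file(io.BytesIO(payload), mimetype="application/octet-stream")
```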
It would be best to repurpose the `bytes` input as the body to send in the response (wrapping in `io.BytesIO()` behind the scenes). | closed | 2022-10-27T15:44:03Z | 2023-10-01T00:20:46Z | https://github.com/pallets/quart/issues/203 | [] | yan-hic | 1 |
gradio-app/gradio | deep-learning | 10,320 | Option "editable" in gr.Chatbot could only work for the first message in a group of consecutive messages. | ### Describe the bug
Option "editable" in gr.Chatbot could only work for the first message in a group of consecutive messages.
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
examples = [
{"role": "user", "content": "User message 1."},
{"role": "user", "content": "User message 2."},
{
"role": "assistant",
"content": "This is a plain text.",
},
]
with gr.Blocks() as demo:
chatbot = gr.Chatbot(
type="messages",
editable="all",
group_consecutive_messages=False,
)
demo.load(lambda: examples, None, chatbot)
demo.launch(server_name="0.0.0.0", server_port=19011)
```
### Screenshot

### Logs
_No response_
### System Info
```shell
Gradio==5.10.0
Python==3.10.14
```
### Severity
Blocking usage of gradio | open | 2025-01-09T03:44:13Z | 2025-03-22T11:57:33Z | https://github.com/gradio-app/gradio/issues/10320 | [
"bug"
] | tankgit | 1 |
aio-libs/aiopg | sqlalchemy | 3 | Add docstrings to public functions | closed | 2014-04-04T19:05:25Z | 2014-06-16T16:56:49Z | https://github.com/aio-libs/aiopg/issues/3 | [] | asvetlov | 1 | |
biolab/orange3 | numpy | 6,646 | Add GESD to outliers | Please add support for Generalized ESD, a robust and powerful unsupervised algorithm for identification of outliers. This algorithm is easy to understand but seemingly underused because few tools make it readily available. | closed | 2023-11-20T15:49:56Z | 2023-12-12T23:15:25Z | https://github.com/biolab/orange3/issues/6646 | [] | belg4mit | 2 |
davidsandberg/facenet | tensorflow | 578 | Validate_on_lfw result differ from training step validate lfw | Hi, David:
As in the title, I'm interested in your topic and trying to understand it, but I have a problem with validation on LFW. When I run train_softmax.py and it reaches the validate-on-LFW stage with the model (short name: model X), the validation result is as below:
train_softmax.py
Accuracy: 0.963+-0.007
Validation rate: 0.80833+-0.03059 @ FAR=0.00133
Later, I used the same model (model X) to run the script below:
validate_on_lfw.py:
Accuracy: 0.961+-0.007
Validation rate: 0.73100+-0.03059 @ FAR=0.00133
I have no idea why running validation on LFW with the same model gives a different result. Do you have any tips, or another method you can share? Many thanks for your help.
dataset: CASIA-WebFace
model :nn4_small2_v1
BR,
Joseph | open | 2017-12-11T02:55:02Z | 2018-01-17T03:21:27Z | https://github.com/davidsandberg/facenet/issues/578 | [] | akimo12345 | 1 |
OpenBB-finance/OpenBB | machine-learning | 6,175 | [Bug] Cannot login on VPS using Linux Terminal | **Describe the bug**
After installing Openbb Terminal in a remote VPS running Linux, I cannot login.
**To Reproduce**
Installed a fresh copy of OpenBB and it worked properly.
Started the terminal and it worked properly.
Went to accounts/login and at this point I believe the terminal is attempting to open a browser window but that is not possible on the Linux terminal VPS.
How am I to login using credentials or the personal access token I have previously generated?
**Screenshots**
https://i.imgur.com/zHYDGt8.png

**Additional context**
Please provide clear and easy to understand steps on how to login as I am new to both Linux and Openbb.
**Later Edit:**
After running /accounts/login multiple times, this error appeared three times in about 10 tries. I do not know if they are connected.
 | open | 2024-03-07T21:47:15Z | 2024-03-07T22:17:40Z | https://github.com/OpenBB-finance/OpenBB/issues/6175 | [] | HunterXClarke | 0 |
mljar/mercury | data-visualization | 248 | Network Error when deployed on Hugging Face | I recently factory reset a build using mercury in hugging face, and I am now getting a network error. The original deployment of the build was roughly 2 months ago, and was working fine until I did a factory reset.
Network Error
Please check if you have internet connection and server is running. In case of problems, please contact administrator.
There is no errors, and says the space is running. | closed | 2023-04-12T08:24:03Z | 2023-04-21T10:56:46Z | https://github.com/mljar/mercury/issues/248 | [
"bug"
] | stametsm | 6 |
serengil/deepface | machine-learning | 743 | Not An Issue Thread - I made a tutorial for how to install and utilize DeepFace | I can make a pull request as well if you agree to add it as a tutorial to the readme file. I would appreciate that very much.
Thank you so much for this amazing Library
[**How To Find Best Stable Diffusion Generated Images By Using DeepFace AI - DreamBooth / LoRA Training**](https://youtu.be/343I11mhnXs)
| closed | 2023-05-04T23:08:41Z | 2023-05-05T07:47:27Z | https://github.com/serengil/deepface/issues/743 | [
"question"
] | FurkanGozukara | 1 |
pytorch/pytorch | deep-learning | 149,468 | torch.library.opcheck doesn't check strides for CPU Tensors | Repro:
```py
import torch
from torchvision.transforms.functional import to_pil_image, pil_to_tensor
import PIL
def crop(pic, box):
    img = to_pil_image(pic.cpu())
    cropped_img = img.crop(box)
    return pil_to_tensor(cropped_img).to(pic.device) / 255.
img = torch.ones(3, 64, 64)
img *= torch.linspace(0, 1, steps=64) * torch.linspace(0, 1, steps=64).unsqueeze(-1)
cropped_img = crop(img, (10, 10, 50, 50))
def f(img):
    return crop(img, (10, 10, 50, 50))
cropped_img = f(img)
print(img.shape, img.stride())
print(cropped_img.shape, cropped_img.stride())
from typing import Sequence
@torch.library.custom_op("mylib::crop", mutates_args=())
def crop(pic: torch.Tensor, box: Sequence[int]) -> torch.Tensor:
    img = to_pil_image(pic.cpu())
    cropped_img = img.crop(box)
    result = (pil_to_tensor(cropped_img) / 255.).to(pic.device, pic.dtype)
    return result
@crop.register_fake
def _(pic, box):
    channels = pic.shape[0]
    x0, y0, x1, y1 = box
    # result = pic.new_empty(y1 - y0, x1 - x0, channels).permute(2, 0, 1)
    result = pic.new_empty(channels, y1 - y0, x1 - x0)
    return result
result = torch.library.opcheck(crop, (img, (10, 10, 50, 50)))
print(result)
```
cc @ezyang @gchanan @kadeng @msaroufim | open | 2025-03-19T01:32:23Z | 2025-03-19T01:44:15Z | https://github.com/pytorch/pytorch/issues/149468 | [
"high priority",
"triage review"
] | zou3519 | 1 |
harry0703/MoneyPrinterTurbo | automation | 60 | Error reported | <img width="739" alt="企业微信截图_1711366472407" src="https://github.com/harry0703/MoneyPrinterTurbo/assets/88197887/a5d910ee-9828-47a7-a4a6-400da3f94cdd">
I have already set the relevant parameters, but when the AI generates the copy, or generates keywords from the copy, it still reports an OpenAI error:
File "C:\Users\MaCheng\.conda\envs\MoneyPrinterTurbo\lib\site-packages\openai\_base_client.py", line 988, in _request
raise self._make_status_error_from_response(err.response) from None
openai.AuthenticationError: Error code: 401 - {'error': {'message': 'auth failed', 'type': ''}}
| closed | 2024-03-25T11:40:41Z | 2024-03-26T07:35:14Z | https://github.com/harry0703/MoneyPrinterTurbo/issues/60 | [] | mmacheng | 3 |
plotly/dash-table | plotly | 583 | virtualization not working with editing? | Promoting this from the community forum. I haven't tried reproducing, but it looks like a good report: https://community.plot.ly/t/dash-data-table-virtualization-completely-broken-with-an-editable-table/28565 | closed | 2019-09-12T17:56:02Z | 2019-09-13T18:45:51Z | https://github.com/plotly/dash-table/issues/583 | [
"dash-type-bug",
"regression",
"size: 1"
] | chriddyp | 0 |
serpapi/google-search-results-python | web-scraping | 78 | Sort google news results by date [engine = google_news ] | Could we please have the equivalent of `"tbs":"sbd:1" ` in the `engine = google_news` to show latest news first?
| closed | 2025-03-15T16:41:25Z | 2025-03-16T21:46:13Z | https://github.com/serpapi/google-search-results-python/issues/78 | [] | mohamedScikitLearn | 1 |
pyjanitor-devs/pyjanitor | pandas | 1,181 | [TST] Compat with macos and window, FailedHealthCheck | This PR #1143 is almost done.
But one problem remains.
The HealthCheck always fails on macOS and Windows, as follows.
https://github.com/pyjanitor-devs/pyjanitor/actions/runs/3281183133/jobs/5402869041#step:5:2147
```python
_____________________________ test_df_key_hashable _____________________________
[gw2] darwin -- Python 3.10.6 /Users/runner/miniconda3/envs/test/bin/python
@given(df=df_strategy())
> def test_df_key_hashable(df):
E hypothesis.errors.FailedHealthCheck: Data generation is extremely slow: Only produced 6 valid examples in 1.08 seconds (0 invalid ones and 4 exceeded maximum size). Try decreasing size of the data you're generating (with e.g. max_size or max_leaves parameters).
E See https://hypothesis.readthedocs.io/en/latest/healthchecks.html for more information about this. If you want to disable just this health check, add HealthCheck.too_slow to the suppress_health_check settings for this test.
tests/functions/test_expand_grid.py:40: FailedHealthCheck
```
I wondered whether we could use a smaller data size, like `max_size=10` or even smaller?
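For illustration, a sketch of the two knobs being discussed, a smaller strategy size and/or relaxing the health check on the affected tests (the strategy import points at the module linked below; the test body is a placeholder):
```python
from hypothesis import HealthCheck, given, settings

from janitor.testing_utils.strategies import df_strategy

@settings(suppress_health_check=[HealthCheck.too_slow], deadline=None)
@given(df=df_strategy())
def test_df_key_hashable(df):
    assert df is not None  # placeholder body
```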
https://github.com/pyjanitor-devs/pyjanitor/blob/f0c79066c3faa0762626339054692401a7b8db58/janitor/testing_utils/strategies.py#L21-L43 | closed | 2022-10-20T06:34:14Z | 2022-10-24T19:00:57Z | https://github.com/pyjanitor-devs/pyjanitor/issues/1181 | [] | Zeroto521 | 3 |
521xueweihan/HelloGitHub | python | 2,592 | Can we find some CI/CD (release pipeline) projects similar to Alibaba's Aone? | closed | 2023-08-14T09:06:12Z | 2023-08-20T02:31:03Z | https://github.com/521xueweihan/HelloGitHub/issues/2592 | [] | ZhangLe1993 | 0 |
InstaPy/InstaPy | automation | 6,382 | Unexpected Keyword argument "instapyfollowed" |
## Expected Behavior
Wanted to unfollow after following.
## Current Behavior
Traceback (most recent call last):
File "/home/cliff/InstaPy/unfollow.py", line 35, in <module>
session.unfollow_users(amount=100, InstapyFollowed=(True, "nonfollowers"),
TypeError: unfollow_users() **got an unexpected keyword argument 'InstapyFollowed'**
any help? | open | 2021-10-24T19:56:34Z | 2021-12-04T00:08:19Z | https://github.com/InstaPy/InstaPy/issues/6382 | [] | saintredog | 3 |
RomelTorres/alpha_vantage | pandas | 365 | PHP Stock Ticker on WP Not Fetching Crypto Data |
```php
<?php
$apiKey = 'API Key';
function callStockdataAPI($symbol, $apiKey)
{
$endpoint = 'https://www.alphavantage.co/query';
$params = [
'function' => 'TIME_SERIES_DAILY_ADJUSTED',
'symbol' => $symbol,
'apikey' => $apiKey
];
$url = $endpoint . '?' . http_build_query($params);
try {
$response = file_get_contents($url);
if ($response === false) {
return "API call failed for {$symbol}";
}
$data = json_decode($response, true);
if (json_last_error() !== JSON_ERROR_NONE) {
return "JSON decoding error for {$symbol}";
}
if (!isset($data['Time Series (Daily)'])) {
return "No data available for {$symbol}";
}
$latestData = reset($data['Time Series (Daily)']);
$stockPrice = $latestData['5. adjusted close'];
return "{$symbol}: \${$stockPrice}";
} catch (\Throwable $e) {
return "Error fetching stock price for {$symbol}: " . $e->getMessage();
}
}
function callCryptodataAPI($symbol, $apiKey)
{
$endpoint = 'https://www.alphavantage.co/query';
$params = [
'function' => 'DIGITAL_CURRENCY_DAILY',
'symbol' => $symbol,
'market' => 'USD',
'apikey' => $apiKey
];
$url = $endpoint . '?' . http_build_query($params);
try {
$response = file_get_contents($url);
if ($response === false) {
return "API call failed for {$symbol}";
}
$data = json_decode($response, true);
if (json_last_error() !== JSON_ERROR_NONE) {
return "JSON decoding error for {$symbol}";
}
if (!isset($data['Time Series (Digital Currency Daily)'])) {
return "No data available for {$symbol}";
}
$latestData = reset($data['Time Series (Digital Currency Daily)']);
$price = $latestData['4a. close (USD)'];
return "{$symbol}: \${$price}";
} catch (\Throwable $e) {
return "Error fetching cryptocurrency data for {$symbol}: " . $e->getMessage();
}
}
$data = [];
$symbols = [
'GOOG', 'AAPL', 'TSLA', 'META', 'AMZN', 'NFLX', 'GME', 'AMC', 'NOK', 'TSM', 'BLK',
'BTC', 'ETH', 'XRP'
];
foreach ($symbols as $symbol) {
if (strpos($symbol, 'BTC') !== false || strpos($symbol, 'ETH') !== false || strpos($symbol, 'XRP') !== false) {
$data[] = callCryptodataAPI($symbol, $apiKey);
} else {
$data[] = callStockdataAPI($symbol, $apiKey);
}
}
if (!empty($data)) {
$stockTickerText = implode(" ", $data);
} else {
$stockTickerText = "No data available";
}
?>
<!DOCTYPE html>
<html>
<head>
<style>
/* CSS for the ticker marquee */
.ticker {
background-color: black;
color: lime;
font-family: "Courier New", monospace;
font-size: 16px;
overflow: hidden;
white-space: nowrap;
}
.ticker span {
display: inline-block;
padding-left: 100%;
animation: marquee 20s linear infinite;
}
@keyframes marquee {
0% {
transform: translateX(0);
}
100% {
transform: translateX(-100%);
}
}
</style>
</head>
<body>
<div class="ticker">
<span id="stockTicker"><?php echo $stockTickerText; ?></span>
</div>
</body>
</html>
``` | closed | 2024-05-18T18:47:43Z | 2024-08-14T10:48:26Z | https://github.com/RomelTorres/alpha_vantage/issues/365 | [] | f100001e | 3
lepture/authlib | django | 681 | Issue using Python 3.13 | **Describe the bug**
A clear and concise description of what the bug is.
**Error Stacks**
```
../../../.cache/pypoetry/virtualenvs/rp-python-sdk-V_ZVmfOb-py3.13/lib/python3.13/site-packages/authlib/jose/__init__.py:14: in <module>
from .rfc7517 import Key, KeySet, JsonWebKey
../../../.cache/pypoetry/virtualenvs/rp-python-sdk-V_ZVmfOb-py3.13/lib/python3.13/site-packages/authlib/jose/rfc7517/__init__.py:10: in <module>
from ._cryptography_key import load_pem_key
../../../.cache/pypoetry/virtualenvs/rp-python-sdk-V_ZVmfOb-py3.13/lib/python3.13/site-packages/authlib/jose/rfc7517/_cryptography_key.py:1: in <module>
from cryptography.x509 import load_pem_x509_certificate
../../../.cache/pypoetry/virtualenvs/rp-python-sdk-V_ZVmfOb-py3.13/lib/python3.13/site-packages/cryptography/x509/__init__.py:7: in <module>
from cryptography.x509 import certificate_transparency, verification
../../../.cache/pypoetry/virtualenvs/rp-python-sdk-V_ZVmfOb-py3.13/lib/python3.13/site-packages/cryptography/x509/certificate_transparency.py:11: in <module>
from cryptography.hazmat.bindings._rust import x509 as rust_x509
E pyo3_runtime.PanicException: Python API call failed
```
**To Reproduce**
The only thing I'm doing is importing a library:
```
from authlib.jose import JsonWebSignature
```
**Expected behavior**
It shouldn't fail (this code works in 3.12)
**Environment:**
- OS: Ubuntu
- Python Version: 3.13
- Authlib Version: 1.3.2
| closed | 2024-10-11T05:25:05Z | 2024-10-18T05:18:45Z | https://github.com/lepture/authlib/issues/681 | [
"bug"
] | erikpragt-connectid | 6 |
nltk/nltk | nlp | 3,140 | not downloading | punkt and brown nltk data are not getting downloaded. [nltk_data] Error loading brown: <urlopen error [WinError 10060] A
[nltk_data] connection attempt failed because the connected party
[nltk_data] did not properly respond after a period of time, or
[nltk_data] established connection failed because connected host
[nltk_data] has failed to respond> | open | 2023-04-11T20:03:46Z | 2024-04-05T13:21:22Z | https://github.com/nltk/nltk/issues/3140 | [] | Riya-1403 | 9 |
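The `WinError 10060` above is a network timeout rather than an NLTK bug; when the machine sits behind a proxy or firewall, a sketch like the following sometimes helps (the proxy URL is a placeholder, not something from this issue):

```python
import nltk

# Placeholder proxy settings; only needed when direct access to the
# NLTK data host is blocked by a corporate proxy or firewall.
nltk.set_proxy("http://proxy.example.com:3128")

nltk.download("punkt")
nltk.download("brown")
```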
nikitastupin/clairvoyance | graphql | 11 | Verify that each word in wordlist conforms to name regex | According to https://spec.graphql.org/June2018/#sec-Names, names in GraphQL must match the following regex:
```
[_A-Za-z][_0-9A-Za-z]*
```
We can skip non-matching names and add a flag to disable this verification. | closed | 2020-11-08T13:33:56Z | 2023-03-18T12:29:35Z | https://github.com/nikitastupin/clairvoyance/issues/11 | [
"enhancement"
] | nikitastupin | 0 |
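A minimal sketch of the wordlist filter proposed above (the helper name is made up, not from the repo):

```python
import re

# GraphQL name regex from the spec section linked above.
NAME_RE = re.compile(r"[_A-Za-z][_0-9A-Za-z]*\Z")

def filter_wordlist(words, validate=True):
    """Drop entries that can never be valid GraphQL names."""
    if not validate:  # the proposed flag to disable verification
        return list(words)
    return [w for w in words if NAME_RE.match(w)]

# usage sketch
print(filter_wordlist(["user", "_id", "bad-name", "9lives"]))  # ['user', '_id']
```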
Miserlou/Zappa | flask | 1,283 | Unable to import prebuild script module | I'm trying to run a prebuild script but getting an error saying that it can't import it. It only happens if I'm importing my own module
## Context
I have the following script:
```
import boto3
from configuration.settings import BUCKET_PREFIX
def create_buckets():
s3 = boto3.client('s3')
image_upload_bucket = f'{BUCKET_PREFIX}-image-upload'
whatsapp_upload_bucket = f'{BUCKET_PREFIX}-whatsapp-upload'
s3.create_bucket(ACL='private', Bucket=image_upload_bucket)
s3.create_bucket(ACL='private', Bucket=whatsapp_upload_bucket)
```
The function lives under `utils.general_utils`. When running `zappa update`, I get `Failed to import prebuild script module: "utils.general_utils"`; `zappa update` only works when I remove `from configuration.settings import BUCKET_PREFIX`.
`configuration.settings` is a local module.
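One workaround sketch (an assumption on my side, since I have not confirmed how Zappa sets up `sys.path` when it imports the prebuild module) is to defer the project-local import until the function actually runs:

```python
import boto3

def create_buckets():
    # Deferred import: avoids failing at module import time if the project
    # root is not importable at the moment Zappa loads the prebuild module.
    from configuration.settings import BUCKET_PREFIX

    s3 = boto3.client('s3')
    s3.create_bucket(ACL='private', Bucket=f'{BUCKET_PREFIX}-image-upload')
    s3.create_bucket(ACL='private', Bucket=f'{BUCKET_PREFIX}-whatsapp-upload')
```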
## Expected Behavior
Should succeed running the prebuild script
## Actual Behavior
`zappa update` fails with an error `Unable to import prebuild script module`
## Steps to Reproduce
See example code
## Your Environment
* Zappa version used: 0.45.1
* Operating System and Python version: macOS High Sierra
* The output of `pip freeze`:
```
argcomplete==1.9.2
base58==0.2.4
blinker==1.4
boto3==1.4.8
botocore==1.8.5
cachetools==2.0.1
certifi==2017.11.5
cfn-flip==0.2.5
chardet==3.0.4
click==6.7
dill==0.2.7.1
docutils==0.14
durationpy==0.5
firebase-admin==2.5.0
Flask==0.12.2
future==0.16.0
google-api-core==0.1.1
google-auth==1.2.1
google-cloud-core==0.28.0
google-cloud-firestore==0.28.0
google-cloud-storage==1.6.0
google-gax==0.15.16
google-resumable-media==0.3.1
googleapis-common-protos==1.5.3
grpcio==1.7.0
hjson==3.0.1
idna==2.6
itsdangerous==0.24
Jinja2==2.10
jmespath==0.9.3
kappa==0.6.0
lambda-packages==0.19.0
MarkupSafe==1.0
placebo==0.8.1
ply==3.8
protobuf==3.5.0.post1
pyasn1==0.4.2
pyasn1-modules==0.2.1
python-dateutil==2.6.1
python-slugify==1.2.4
pytz==2017.3
PyYAML==3.12
raven==6.3.0
requests==2.18.4
rsa==3.4.2
s3transfer==0.1.12
six==1.11.0
toml==0.9.3
tqdm==4.19.1
troposphere==2.1.1
Unidecode==0.4.21
urllib3==1.22
Werkzeug==0.12
wsgi-request-logger==0.4.6
zappa==0.45.1
```
* Your `zappa_settings.py`:
```
{
"production": {
"app_function": "authentication_service.__init__.app",
"aws_region": "us-east-1",
"project_name": "s3-authentication-service",
"runtime": "python3.6",
"manage_roles": false,
"role_name": "api_gw_role",
"prebuild_script": "utils.general_utils.create_buckets",
"events": [
{
"function": "events.s3_events.process_image_uploaded",
"event_source": {
"arn": "arn:aws:s3:::mage-upload",
"events": [
"s3:ObjectCreated:*"
]
}
}
]
}
}
``` | closed | 2017-12-06T12:20:58Z | 2017-12-07T10:47:04Z | https://github.com/Miserlou/Zappa/issues/1283 | [] | efimerdlerkravitz | 1 |
pallets-eco/flask-sqlalchemy | flask | 578 | Model class for make_declarative_base() | I've updated from 2.1 and now this function requires a model parameter. I've tried to give it the flask_sqlalchemy `Model` class and receive this:
> TypeError: metaclass conflict: the metaclass of a derived class must be a (non-strict) subclass of the metaclasses of all its bases
I tried passing it my own class with a query method:
```
class Repository(object):
    entity_class = None

    def __init__(self, session=None):
        self.session = session

    @property
    def query(self):
        return self.session.query(self._get_entity_class())
```
Then I receive this:
> TypeError: make_declarative_base() takes at least 2 arguments (2 given)
Whaaat?! Can you show usage example? | closed | 2017-12-28T15:22:00Z | 2020-12-05T20:46:34Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/578 | [] | ProstoMaxim | 0 |
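For what it's worth, a minimal usage sketch for the question above, assuming the `model_class` argument added in Flask-SQLAlchemy 2.2+ (worth double-checking against the exact version in use): pass a plain `Model` subclass rather than calling `make_declarative_base()` yourself.

```python
from flask import Flask
from flask_sqlalchemy import Model, SQLAlchemy
from sqlalchemy import Column, Integer, String

class Repository(Model):
    """Custom base: every model picks up this extra behaviour."""
    @classmethod
    def first_by(cls, **filters):
        return cls.query.filter_by(**filters).first()

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite://"  # placeholder URI
db = SQLAlchemy(app, model_class=Repository)         # not make_declarative_base() directly

class TbFile(db.Model):
    id = Column(Integer, primary_key=True)
    filename = Column(String(255))
```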
microsoft/JARVIS | deep-learning | 211 | models_server.py error | python models_server.py --config configs/config.gradio.yaml
Fetching 27 files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████| 27/27 [00:00<00:00, 223365.30it/s]
`text_config_dict` is provided which will be used to initialize `CLIPTextConfig`. The value `text_config["id2label"]` will be overriden.
`text_config_dict` is provided which will be used to initialize `CLIPTextConfig`. The value `text_config["id2label"]` will be overriden.
WARNING:datasets.builder:Found cached dataset cmu-arctic-xvectors (/home/cv_lifei/.cache/huggingface/datasets/cmu-arctic-xvectors/default/0.0.1/a62fea1f9415e240301ea0042ffad2a3aadf4d1caa7f9a8d9512d631723e781f)
Some weights of DPTForDepthEstimation were not initialized from the model checkpoint at models/Intel/dpt-large and are newly initialized: ['neck.fusion_stage.layers.0.residual_layer1.convolution1.bias', 'neck.fusion_stage.layers.0.residual_layer1.convolution2.bias', 'neck.fusion_stage.layers.0.residual_layer1.convolution1.weight', 'neck.fusion_stage.layers.0.residual_layer1.convolution2.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Could not find image processor class in the image processor config or the model config. Loading based on pattern matching with the model's feature extractor configuration.
Could not find image processor class in the image processor config or the model config. Loading based on pattern matching with the model's feature extractor configuration.
Could not find image processor class in the image processor config or the model config. Loading based on pattern matching with the model's feature extractor configuration.
Could not load the custom kernel for multi-scale deformable attention: [Errno 2] No such file or directory: '/raid/anaconda3/envs/lifei_llm/lib/python3.10/site-packages/transformers/models/deformable_detr/custom_kernel/vision.cpp'
Could not find image processor class in the image processor config or the model config. Loading based on pattern matching with the model's feature extractor configuration.
Could not find image processor class in the image processor config or the model config. Loading based on pattern matching with the model's feature extractor configuration.
Could not find image processor class in the image processor config or the model config. Loading based on pattern matching with the model's feature extractor configuration.
`text_config_dict` is provided which will be used to initialize `CLIPTextConfig`. The value `text_config["id2label"]` will be overriden.
[ ready ] 52.25122785568237s | open | 2023-06-14T04:33:15Z | 2024-02-15T12:21:23Z | https://github.com/microsoft/JARVIS/issues/211 | [] | lovelucymuch | 1 |
gradio-app/gradio | python | 10,288 | chatbot.py:242: UserWarning: You have not specified a value for the `type` parameter. | ### Describe the bug
when I start the f5-tts_infer-gradio I get the following error:
E:\miniconda3\envs\f5-tts\lib\site-packages\gradio\components\chatbot.py:242: UserWarning: You have not specified a value for the `type` parameter. Defaulting to the 'tuples' format for chatbot messages, but this is deprecated and will be removed in a future version of Gradio. Please set type='messages' instead, which uses openai-style dictionaries with 'role' and 'content' keys.
warnings.warn(
Starting app...
INFO: Could not find files for the given pattern(s). (original console output: 信息: 用提供的模式无法找到文件。)
It doesn't work. How do I fix this?
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
```
### Screenshot
_No response_
### Logs
_No response_
### System Info
```shell
windows10
```
### Severity
I can work around it | closed | 2025-01-05T13:10:33Z | 2025-01-06T10:59:43Z | https://github.com/gradio-app/gradio/issues/10288 | [
"bug"
] | FreeThinker12 | 2 |
apache/airflow | automation | 47,488 | OpenLineage can silently lose Snowflake query_ids and can't support multiple query_ids | ### Apache Airflow Provider(s)
openlineage
### Versions of Apache Airflow Providers
latest
### Apache Airflow version
2.X
### Operating System
macos
### Deployment
Virtualenv installation
### Deployment details
_No response_
### What happened
When using `SqlExecuteQueryOperator` with Snowflake and running a query with multiple statements in it, OpenLineage will only include the first `query_id` in `ExternalQueryRunFacet`.
This is problematic, as users don't have full control on how the statements are executed (when query consists of multiple statements and `split_statements=False` operator throws an error `snowflake.connector.errors.ProgrammingError: 000008 (0A000): 01bad84f-0000-4392-0000-3d95000110ce: Actual statement count 3 did not match the desired statement count 1.`). The only solution for users to retrieve all query_ids in OL events is to set `split_statements=False` and make sure each task runs a single statement, which is rarely a case.
In BQ, similar problem is solved by ["parent_query_job"](https://github.com/apache/airflow/blob/ab3a1869c57def3ee74a925709cece4c7e07b891/providers/google/src/airflow/providers/google/cloud/openlineage/mixins.py#L109) executing each statement within a "child_query_job" with a link to the parent job, so that it's easy to access all ids later on. I couldn't find a similar mechanism in Snowflake.
### What you think should happen instead
Ideally, from within a single task (SqlExecuteQueryOperator) we would emit a separate OL event for each statement run, containing parentRunFacet pointing to the Airflow task. This may however take some time to implement properly and may? (or not) need some adjustments from the consumers?
As a partial solution, we could extend `ExternalQueryRunFacet` with a new property that accepts multiple `externalQueryIds`. This requires some discussion from OL community as how it fits to the spec.
Another small note, right now we are already sending the entire sql query (with all the statements) in `SQLJobFacet`, regardless if they execute as separate "queries" or not. So it would probably need adjustment as well.
### How to reproduce
Run a sample query like:
```
USE WAREHOUSE COMPUTE_WH;
CREATE OR REPLACE TABLE test.public.result AS SELECT * FROM snowflake_sample_data.tpch_sf1.customer;
```
You can see in Snowflake that this resulted in two queries being run, with two separate query_ids and only first one is included in OpenLineage event.
### Anything else
_No response_
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| open | 2025-03-07T09:43:27Z | 2025-03-07T09:51:04Z | https://github.com/apache/airflow/issues/47488 | [
"kind:bug",
"area:providers",
"needs-triage",
"provider:openlineage"
] | kacpermuda | 0 |
django-oscar/django-oscar | django | 4,162 | Code too complex; so much is illogical | Hello, I like this product very much, but there are so many things in the code that are not good or are illogical.
1. I still have no idea why you map URLs in apps.py, which is much more work than using a normal urls.py. I see no advantage.
2. Why don't you use a normal models.py and define models normally? Why do you use an abstract class?
3. There is a lot of illogical code. For example, at checkout/shipping-method/ the chosen method is POSTed; why not pass it as a GET parameter? The Strategy and Prices code in partner is an absolute black hole, and I don't understand why it is written this way.
For a beginner with no experience with your product, there is no chance of understanding this code.
My wish is that in the next release you don't add more features. Please clean up the code and make it simpler and more readable.
One last thing: you don't ship a complete integration of shipping methods and payment gateways in the product and only link to third-party packages. Implementing the checkout process completely is not an easy task, and this is unnecessary effort. Please integrate a standard PayPal integration that works out of the box, because I think nearly 100% of all customers want PayPal enabled...
P.S. I know several developers who tell me I should forget django-oscar and pick another webshop framework that is easier to customize | closed | 2023-09-02T10:28:40Z | 2023-09-06T06:02:29Z | https://github.com/django-oscar/django-oscar/issues/4162 | [] | Bastilla123 | 1
xzkostyan/clickhouse-sqlalchemy | sqlalchemy | 223 | how to create an ORM class mapped to a Postgres table | Usually, I create an ORM class as below:
```python
from sqlalchemy import Column
from sqlalchemy.sql import func as F
from clickhouse_sqlalchemy import types, engines
from clickhouse_sqlalchemy.ext.declarative import declarative_base

ChBase = declarative_base()

class TbFile(ChBase):
    __tablename__ = 'tb_file'

    created = Column(types.DateTime, server_default=F.now())
    filename = Column(types.String, primary_key=True)
    id = Column(types.UInt32)

    # engine options and the table comment go into a single __table_args__
    __table_args__ = (
        engines.MergeTree(order_by='id', primary_key='id'),
        {'comment': 'File Info Table V2.0'},
    )
```
But if I want to use ClickHouse to interact with a relational database table (for example, a Postgres table), how do I create it?
From this webpage, we can get the raw SQL method to do it.
https://clickhouse.com/docs/zh/engines/database-engines/postgresql/
CREATE DATABASE test_database
ENGINE = PostgreSQL('host:port', 'database', 'user', 'password'[, `use_table_cache`]);
SELECT * FROM test_database.test_table;
INSERT INTO test_database.test_table VALUES (3,4); | open | 2022-12-06T07:28:39Z | 2022-12-06T07:28:39Z | https://github.com/xzkostyan/clickhouse-sqlalchemy/issues/223 | [] | flyly0755 | 0 |
sebp/scikit-survival | scikit-learn | 92 | Random forest logrank splitting decision rule | Hello!
Is there any documentation on this topic? Why pick the log-rank statistic to measure the quality of a split?
Thanks!
Camila F | closed | 2020-02-14T15:26:22Z | 2020-02-15T12:03:24Z | https://github.com/sebp/scikit-survival/issues/92 | [] | camferna | 1 |
ResidentMario/geoplot | matplotlib | 24 | Backport to Python versions pre-3.5 | closed | 2017-01-30T23:54:09Z | 2018-05-10T16:59:16Z | https://github.com/ResidentMario/geoplot/issues/24 | [
"enhancement"
] | ResidentMario | 2 | |
pbugnion/gmaps | jupyter | 226 | Drawing layer should work when exported to HTML | At the moment, the drawing layer only works in the notebook. We should see if it's possible to move more of the interaction client-side, so that the map still works when exported to HTML.
The main source of problems will be widget instantiation: when a feature is added to the map, this sends a callback to the Python side, which instantiates an appropriate widget. We need to replace this with an instantiation of the widget client-side.
Originally opened in issue #223 . | open | 2018-03-12T07:22:30Z | 2018-03-12T07:22:30Z | https://github.com/pbugnion/gmaps/issues/226 | [] | pbugnion | 0 |
vastsa/FileCodeBox | fastapi | 60 | Building a Docker image for the ARM platform | Could an ARM-platform Docker image be built and published alongside the existing one? | closed | 2023-03-08T02:29:56Z | 2023-08-15T10:00:27Z | https://github.com/vastsa/FileCodeBox/issues/60 | [] | waifei01 | 9
waditu/tushare | pandas | 928 | TUSHARE PRO historical stock data is incomplete | pro = ts.pro_api()
pf=ts.pro_bar(pro_api=pro, ts_code='000544.SZ', adj='qfq', start_date='19900101', end_date='20191231')
pf = pro.query('daily', ts_code='000544', start_date='19900101', end_date='20191231')
For example, stock codes: 000544.SZ (中原环保), listed 19931208, and 000035.SZ (中国天楹), listed 19940408. I found that 000544.SZ is missing historical data before 20011031, and 000035.SZ is missing historical data before 19971223.
TUSHARE PRO account: 13907359903.
Email: 13907359903@139.COM.
| closed | 2019-02-20T02:06:24Z | 2019-02-22T00:48:08Z | https://github.com/waditu/tushare/issues/928 | [] | stillpassion | 2 |
yt-dlp/yt-dlp | python | 12,121 | Unable to Record CBC | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [x] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [x] I'm reporting that yt-dlp is broken on a **supported** site
- [x] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [x] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [x] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [x] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [x] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [x] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
USA
### Provide a description that is worded well enough to be understood
Failed to parse JSON (caused by JSONDecodeError("Expecting value in '': line 1 column 1 (char 0)"));
### Provide verbose output that clearly demonstrates the problem
- [x] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [x] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-F', '-vU', 'https://gem.cbc.ca/murdoch-mysteries/s18e09', '--username', 'PRIVATE', '--password', 'PRIVATE']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version nightly@2025.01.16.232854 from yt-dlp/yt-dlp-nightly-builds [164368610] (win_exe)
[debug] Python 3.10.11 (CPython AMD64 64bit) - Windows-10-10.0.26100-SP0 (OpenSSL 1.1.1t 7 Feb 2023)
[debug] exe versions: ffmpeg 7.1-essentials_build-www.gyan.dev (setts), ffprobe 7.1-essentials_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.12.14, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.40.1, urllib3-2.3.0, websockets-14.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
Latest version: nightly@2025.01.16.232854 from yt-dlp/yt-dlp-nightly-builds
yt-dlp is up to date (nightly@2025.01.16.232854 from yt-dlp/yt-dlp-nightly-builds)
[debug] Using fake IP 99.253.176.209 (CA) as X-Forwarded-For
[debug] Loading cbcgem.claims_token from cache
[gem.cbc.ca] Extracting URL: https://gem.cbc.ca/murdoch-mysteries/s18e09
[gem.cbc.ca] murdoch-mysteries/s18e09: Downloading JSON metadata
ERROR: [gem.cbc.ca] murdoch-mysteries/s18e09: murdoch-mysteries/s18e09: Failed to parse JSON (caused by JSONDecodeError("Expecting value in '': line 1 column 1 (char 0)")); please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
File "yt_dlp\extractor\common.py", line 742, in extract
File "yt_dlp\extractor\cbc.py", line 638, in _real_extract
File "yt_dlp\extractor\common.py", line 1152, in download_content
File "yt_dlp\extractor\common.py", line 1119, in download_handle
File "yt_dlp\extractor\common.py", line 1107, in parse
File "yt_dlp\extractor\common.py", line 1094, in _parse_json
File "yt_dlp\extractor\common.py", line 1077, in __print_error
File "yt_dlp\utils\_utils.py", line 565, in decode
File "json\decoder.py", line 337, in decode
File "json\decoder.py", line 355, in raw_decode
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "yt_dlp\extractor\common.py", line 1091, in _parse_json
File "json\__init__.py", line 359, in loads
File "yt_dlp\utils\_utils.py", line 573, in decode
json.decoder.JSONDecodeError: Expecting value in '': line 1 column 1 (char 0)
``` | closed | 2025-01-18T11:00:41Z | 2025-01-18T11:09:31Z | https://github.com/yt-dlp/yt-dlp/issues/12121 | [
"duplicate"
] | markwittl | 1 |
microsoft/hummingbird | scikit-learn | 240 | Add support for Tweedie for LGBM | This will require the addition of a new `post_transform` [here](https://github.com/microsoft/hummingbird/blob/7e5aa66eef94debedb27293dd2c6666946347199/hummingbird/ml/operator_converters/_gbdt_commons.py#L109) and [here](https://github.com/microsoft/hummingbird/blob/master/hummingbird/ml/operator_converters/constants.py), and the detection and extraction of Tweedie parameters from the [lightGBM converter](https://github.com/microsoft/hummingbird/blob/master/hummingbird/ml/operator_converters/sklearn/lightgbm.py). | closed | 2020-08-18T06:13:46Z | 2020-08-19T16:12:43Z | https://github.com/microsoft/hummingbird/issues/240 | [] | interesaaat | 0 |
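For reference on the Tweedie case above, the transform itself is tiny. A sketch of just the math, not hummingbird's internal registration API: LightGBM's tweedie objective fits the log of the mean, so predictions are the exponential of the raw margin.

```python
import torch

def tweedie_post_transform(raw_margin: torch.Tensor) -> torch.Tensor:
    # LightGBM tweedie regression predicts exp(raw score): the summed leaf
    # values of the ensemble are the log-mean, so the post-transform is exp.
    return torch.exp(raw_margin)

# usage sketch
print(tweedie_post_transform(torch.tensor([0.0, 1.0])))  # tensor([1.0000, 2.7183])
```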
Neoteroi/BlackSheep | asyncio | 113 | Contributing documentation | Hi @RobertoPrevato and thank you for a cool framework.
For new contributors, it would be cool to have **contributing** documentation in place.
- how to contribute
- what type of style we should follow
- how to set up a development environment
- how we should name branches
- ...etc..
Does this make sense? | closed | 2021-04-29T11:16:34Z | 2021-05-03T07:47:15Z | https://github.com/Neoteroi/BlackSheep/issues/113 | [] | myusko | 8 |
flaskbb/flaskbb | flask | 600 | Make FlaskBB's settings (JSON) serializable instead of relying on pickle | I kinda like the approach of indico. It seems like they define the settings as WTF Fields. | open | 2021-09-10T18:15:41Z | 2021-09-10T18:15:41Z | https://github.com/flaskbb/flaskbb/issues/600 | [] | sh4nks | 0 |
SYSTRAN/faster-whisper | deep-learning | 1,170 | sound-effects missing on output for beam-size > 1 | If I transcribe with a beam_size of 1 I seem to get tokens like..
[0.00s -> 5.08s] MUSIC
[16.08s -> 17.84s] speech as text,
[17.84s -> 20.24s] speech as text
[20.24s -> 22.96s] speech as text
[22.96s -> 25.76s] CHEERING AND APPLAUSE
[25.76s -> 28.80s] speech as text
but with beam_sizes > 1 I just get the speech and no 'sound-effects'
ideally I would like the [MUSIC] [LAUGHTER] etc.
Can this be done? | closed | 2024-11-23T14:20:14Z | 2024-12-06T20:03:34Z | https://github.com/SYSTRAN/faster-whisper/issues/1170 | [] | rummyr | 3 |
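For the question above, a sketch of the two transcription knobs that look relevant; disabling the default non-speech-token suppression is an assumption about what brings the bracketed tags back, not something I have verified:

```python
from faster_whisper import WhisperModel

model = WhisperModel("large-v2")

# beam_size=1 reproduces the behaviour above; suppress_tokens=[] turns off the
# default suppression of non-speech tokens, which may matter for [MUSIC]-style tags.
segments, info = model.transcribe(
    "input.wav",          # placeholder path
    beam_size=5,
    suppress_tokens=[],
)
for seg in segments:
    print(f"[{seg.start:.2f}s -> {seg.end:.2f}s] {seg.text}")
```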
deepset-ai/haystack | machine-learning | 9,098 | remove `ChatMessage.to_dict` warning | In #9069, we changed the serialization format of `ChatMessage`.
We also added a [warning](https://github.com/deepset-ai/haystack/blob/dae8c7babaf28d2ffab4f2a8dedecd63e2394fb4/haystack/dataclasses/chat_message.py#L341) for users who consume directly the output of `ChatMessage.to_dict` (not in Pipeline serialization).
In a future release (2.14.0?), we should remove this warning. | open | 2025-03-24T09:57:51Z | 2025-03-24T09:57:51Z | https://github.com/deepset-ai/haystack/issues/9098 | [] | anakin87 | 0 |
junyanz/pytorch-CycleGAN-and-pix2pix | computer-vision | 1,511 | About generating a video | This work is excellent, but I have a question: how can I generate the horse2zebra video shown in the demo? Thank you. | open | 2022-11-27T11:45:15Z | 2022-12-28T05:00:19Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1511 | [] | zhangqian001 | 1
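For the video question above, one minimal sketch is to run the frames of a video through the model one by one and then stitch the per-frame outputs back together with OpenCV; the output path and frame rate below are placeholders, assuming the frames were produced with the repo's test script:

```python
import glob
import cv2

# Placeholder path: wherever the test script wrote the translated (fake_B) frames.
frames = sorted(glob.glob("results/horse2zebra_pretrained/test_latest/images/*_fake_B.png"))

first = cv2.imread(frames[0])
h, w = first.shape[:2]
writer = cv2.VideoWriter("horse2zebra.mp4", cv2.VideoWriter_fourcc(*"mp4v"), 30, (w, h))
for path in frames:
    writer.write(cv2.imread(path))
writer.release()
```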
gtalarico/django-vue-template | rest-api | 18 | Heroku App Server error on /api/model | When trying to view MYHEROKUURL/api I'm getting a bad request / server error, and I've noticed I get it on your live demo as well. | closed | 2018-12-23T04:17:16Z | 2018-12-26T02:52:45Z | https://github.com/gtalarico/django-vue-template/issues/18 | [] | zzzint | 3 |
skypilot-org/skypilot | data-science | 4,760 | [Roadmap] SkyPilot Roadmap Q1 2025 | Here is the development roadmap for 2025 Q1. Contributions and feedback are welcome! Please feel free to join our [Slack](https://slack.skypilot.co).
This doc will be updated as we make progress.
## Deployment for Teams
- [x] Client-Server rearchitect #4660
- [x] Deployment of fault-tolerant SkyPilot API server for teams #4737
- [x] Multiple Kubernetes clusters support #4586
- [ ] Performance improvement of client server request scheduling #4732
- [ ] Observability for clusters/jobs/services
- [ ] Robustness and UX improvement of the API server
## Managed Jobs
- [x] Support 2000+ in-parallel jobs for managed jobs #4485
- [x] Avoid requirements of cloud credentials for k8s use case #4708
- [ ] Multi-tenant managed jobs and services
- [x] Consolidate jobs/serve controller #4686
- [ ] Job/Service owner
- [ ] UX improvement of large-scale batch inference. Proposoal: #4735
- [ ] Documents for authentication on controller
- [ ] Integration with AI applications
- [x] Large-scale image search: https://blog.skypilot.co/large-scale-vector-database/
- [ ] more blogs to come
## Serving LLMs
- [ ] High-availability of the serve controller on Kubernetes. Prototype: #4564
- [ ] Low-latency load balancing for LLM serving #4362
- [ ] Low-cost and highly available serving with spot instances #4628
- [ ] Observability improvement
- [x] Additional port for metrics #4356
- [ ] Setting up metrics/dashboard
## Clouds
- [ ] Robustness maintenance of existing clouds
- [ ] H100/H200 GPUs
- [ ] AMD GPUs and ARM CPUs #4793 #4863
- [ ] High-performance AI networking setup on clouds **(help wanted)**
- [ ] Docs for the high-performance networking
- [ ] Automatic catalog fetching for more clouds
- [ ] Open port support for Lambda Cloud
- [ ] Cloud coverage
- [x] Digital Ocean #3832
- [x] Vast AI #4365
- [x] Nebius Cloud #4573
## Developer Productivity
- [x] Buildkite integration for smoke tests #4396
- [ ] Weekly tests #4755
- [ ] Cost reduction for the smoke tests
## Ecosystem Projects
- [ ] AI workloads using SkyPilot
- [ ] Integration with other frameworks
- [ ] Airflow ([Example](https://github.com/skypilot-org/skypilot/tree/master/examples/airflow/training_workflow)) | open | 2025-02-19T20:33:08Z | 2025-03-01T19:23:33Z | https://github.com/skypilot-org/skypilot/issues/4760 | [] | Michaelvll | 0 |
serengil/deepface | machine-learning | 1,100 | KerasTensor Exception when using DeepFace.extract_faces | Hi, I encountered an error while trying to extract faces from an image using the deepface library.
```
faces = DeepFace.extract_faces(
img_path=img_path,
detector_backend = 'retinaface',
enforce_detection=False,
align = True
)
```
I got this error:
```
Exception encountered when calling MaxPooling2D.call().\n\n\x1b[1mArgument `padding` must be either 'valid' or 'same'. Received: padding=VALID\x1b[0m\n\nArguments received by MaxPooling2D.call():\n • args=('<KerasTensor shape=(None, None, None, 64), dtype=float32, sparse=False, name=keras_tensor_5>',)\n • kwargs=<class 'inspect._empty'>
```
The issue appears to be linked to the invocation of MaxPooling2D.call(): the value being passed for the `padding` argument is VALID in uppercase, while the expected value should be lowercase.
Would it be possible to change this to lowercase so that the error is fixed? or perhaps could you advise how to resolve this issue? | closed | 2024-03-11T11:16:38Z | 2024-03-11T12:03:27Z | https://github.com/serengil/deepface/issues/1100 | [] | AaronMk44 | 11 |
ultralytics/ultralytics | deep-learning | 19,427 | How to create a custom dataset using the results from Ultralytics models | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Hi,
I am currently working on a dataset, and I am generating labels for object detection and segmentation.
First, I use grounding dino for detection and pass the predicted bounding boxes to an Ultralytics SAM model.
I want to train D-FINE, YOLOv11 and YOLOv12 using my custom datasets, so I am facing the following challenge: how do I create COCO- and Ultralytics-format annotation files using the outputs from SAM?
Thanks,
Sebastian
### Additional
_No response_ | open | 2025-02-25T18:07:49Z | 2025-02-28T01:38:16Z | https://github.com/ultralytics/ultralytics/issues/19427 | [
"question",
"segment",
"detect"
] | SebastianJanampa | 4 |
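For the annotation-format question above, one possible sketch for the Ultralytics (YOLO segmentation) side: write one `class x1 y1 x2 y2 ...` line per mask from a SAM `Results` object. The box values and class id are placeholders, and the attribute names are assumptions to verify against the installed Ultralytics version.

```python
from pathlib import Path
from ultralytics import SAM

model = SAM("sam_b.pt")
detected_boxes = [[100, 50, 400, 300]]  # placeholder: Grounding DINO output, xyxy pixels
results = model("image.jpg", bboxes=detected_boxes)

label_path = Path("labels/image.txt")
label_path.parent.mkdir(parents=True, exist_ok=True)
with label_path.open("w") as f:
    for r in results:
        if r.masks is None:
            continue
        for poly in r.masks.xyn:          # normalized (x, y) polygon per mask
            coords = " ".join(f"{x:.6f} {y:.6f}" for x, y in poly)
            f.write(f"0 {coords}\n")      # single class id 0 as a placeholder
```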
scikit-hep/awkward | numpy | 2,832 | GPU Tests Failed | The GPU tests failed for commit e3e48744d5a0ef09068996c1c590ca9bcb25492d with the following pytest output:
```
packages/cupy/__init__.py'>
index = ('awkward_missing_repeat', <class 'numpy.int64'>, <class 'numpy.int64'>)
self = <awkward._backends.cupy.CupyBackend object at 0x7f55518a8700>
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
def initialize_cuda_kernels(cupy):
if cupy is not None:
global kernel
if kernel is None:
> import awkward._connect.cuda._kernel_signatures
E ModuleNotFoundError: No module named 'awkward._connect.cuda._kernel_signatures'
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
/opt/build-venv/lib/python3.8/site-packages/awkward/_connect/cuda/__init__.py:162: ModuleNotFoundError
____________________ test_cuda_awkward_missing_repeat_64_2 _____________________
def test_cuda_awkward_missing_repeat_64_2():
outindex = cupy.array([123, 123, 123, 123, 123, 123, 123, 123, 123], dtype=cupy.int64)
index = cupy.array([1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1], dtype=cupy.int64)
indexlength = 3
repetitions = 3
regularsize = 2
> funcC = cupy_backend['awkward_missing_repeat', cupy.int64, cupy.int64]
index = array([1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1])
indexlength = 3
outindex = array([123, 123, 123, 123, 123, 123, 123, 123, 123])
regularsize = 2
repetitions = 3
tests-cuda-kernels/test_cudaawkward_missing_repeat_64.py:42:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/opt/build-venv/lib/python3.8/site-packages/awkward/_backends/cupy.py:38: in __getitem__
_cuda_kernels = cuda.initialize_cuda_kernels(cupy)
cuda = <module 'awkward._connect.cuda' from '/opt/build-venv/lib/python3.8/site-packages/awkward/_connect/cuda/__init__.py'>
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
index = ('awkward_missing_repeat', <class 'numpy.int64'>, <class 'numpy.int64'>)
self = <awkward._backends.cupy.CupyBackend object at 0x7f55518a8700>
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
def initialize_cuda_kernels(cupy):
if cupy is not None:
global kernel
if kernel is None:
> import awkward._connect.cuda._kernel_signatures
E ModuleNotFoundError: No module named 'awkward._connect.cuda._kernel_signatures'
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
/opt/build-venv/lib/python3.8/site-packages/awkward/_connect/cuda/__init__.py:162: ModuleNotFoundError
____________________ test_cuda_awkward_missing_repeat_64_3 _____________________
def test_cuda_awkward_missing_repeat_64_3():
outindex = cupy.array([123, 123, 123, 123, 123, 123, 123, 123, 123], dtype=cupy.int64)
index = cupy.array([1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1], dtype=cupy.int64)
indexlength = 3
repetitions = 3
regularsize = 1
> funcC = cupy_backend['awkward_missing_repeat', cupy.int64, cupy.int64]
index = array([1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1])
indexlength = 3
outindex = array([123, 123, 123, 123, 123, 123, 123, 123, 123])
regularsize = 1
repetitions = 3
tests-cuda-kernels/test_cudaawkward_missing_repeat_64.py:58:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/opt/build-venv/lib/python3.8/site-packages/awkward/_backends/cupy.py:38: in __getitem__
_cuda_kernels = cuda.initialize_cuda_kernels(cupy)
cuda = <module 'awkward._connect.cuda' from '/opt/build-venv/lib/python3.8/site-packages/awkward/_connect/cuda/__init__.py'>
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
index = ('awkward_missing_repeat', <class 'numpy.int64'>, <class 'numpy.int64'>)
self = <awkward._backends.cupy.CupyBackend object at 0x7f55518a8700>
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
def initialize_cuda_kernels(cupy):
if cupy is not None:
global kernel
if kernel is None:
> import awkward._connect.cuda._kernel_signatures
E ModuleNotFoundError: No module named 'awkward._connect.cuda._kernel_signatures'
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
/opt/build-venv/lib/python3.8/site-packages/awkward/_connect/cuda/__init__.py:162: ModuleNotFoundError
____________________ test_cuda_awkward_missing_repeat_64_4 _____________________
def test_cuda_awkward_missing_repeat_64_4():
outindex = cupy.array([123, 123, 123, 123, 123, 123, 123, 123, 123], dtype=cupy.int64)
index = cupy.array([1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1], dtype=cupy.int64)
indexlength = 3
repetitions = 3
regularsize = 2
> funcC = cupy_backend['awkward_missing_repeat', cupy.int64, cupy.int64]
index = array([1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1])
indexlength = 3
outindex = array([123, 123, 123, 123, 123, 123, 123, 123, 123])
regularsize = 2
repetitions = 3
tests-cuda-kernels/test_cudaawkward_missing_repeat_64.py:74:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/opt/build-venv/lib/python3.8/site-packages/awkward/_backends/cupy.py:38: in __getitem__
_cuda_kernels = cuda.initialize_cuda_kernels(cupy)
cuda = <module 'awkward._connect.cuda' from '/opt/build-venv/lib/python3.8/site-packages/awkward/_connect/cuda/__init__.py'>
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
index = ('awkward_missing_repeat', <class 'numpy.int64'>, <class 'numpy.int64'>)
self = <awkward._backends.cupy.CupyBackend object at 0x7f55518a8700>
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
def initialize_cuda_kernels(cupy):
if cupy is not None:
global kernel
if kernel is None:
> import awkward._connect.cuda._kernel_signatures
E ModuleNotFoundError: No module named 'awkward._connect.cuda._kernel_signatures'
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
/opt/build-venv/lib/python3.8/site-packages/awkward/_connect/cuda/__init__.py:162: ModuleNotFoundError
____________________ test_cuda_awkward_missing_repeat_64_5 _____________________
def test_cuda_awkward_missing_repeat_64_5():
outindex = cupy.array([123, 123, 123, 123, 123, 123, 123, 123, 123], dtype=cupy.int64)
index = cupy.array([1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1], dtype=cupy.int64)
indexlength = 3
repetitions = 3
regularsize = 0
> funcC = cupy_backend['awkward_missing_repeat', cupy.int64, cupy.int64]
index = array([1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1])
indexlength = 3
outindex = array([123, 123, 123, 123, 123, 123, 123, 123, 123])
regularsize = 0
repetitions = 3
tests-cuda-kernels/test_cudaawkward_missing_repeat_64.py:90:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/opt/build-venv/lib/python3.8/site-packages/awkward/_backends/cupy.py:38: in __getitem__
_cuda_kernels = cuda.initialize_cuda_kernels(cupy)
cuda = <module 'awkward._connect.cuda' from '/opt/build-venv/lib/python3.8/site-packages/awkward/_connect/cuda/__init__.py'>
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
index = ('awkward_missing_repeat', <class 'numpy.int64'>, <class 'numpy.int64'>)
self = <awkward._backends.cupy.CupyBackend object at 0x7f55518a8700>
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
def initialize_cuda_kernels(cupy):
if cupy is not None:
global kernel
if kernel is None:
> import awkward._connect.cuda._kernel_signatures
E ModuleNotFoundError: No module named 'awkward._connect.cuda._kernel_signatures'
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
/opt/build-venv/lib/python3.8/site-packages/awkward/_connect/cuda/__init__.py:162: ModuleNotFoundError
____________________ test_cuda_awkward_missing_repeat_64_6 _____________________
def test_cuda_awkward_missing_repeat_64_6():
outindex = cupy.array([123, 123, 123, 123, 123, 123, 123, 123, 123], dtype=cupy.int64)
index = cupy.array([1, 2, 2, 3, 0, 2, 0, 2, 1, 1], dtype=cupy.int64)
indexlength = 3
repetitions = 3
regularsize = 3
> funcC = cupy_backend['awkward_missing_repeat', cupy.int64, cupy.int64]
index = array([1, 2, 2, 3, 0, 2, 0, 2, 1, 1])
indexlength = 3
outindex = array([123, 123, 123, 123, 123, 123, 123, 123, 123])
regularsize = 3
repetitions = 3
tests-cuda-kernels/test_cudaawkward_missing_repeat_64.py:106:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/opt/build-venv/lib/python3.8/site-packages/awkward/_backends/cupy.py:38: in __getitem__
_cuda_kernels = cuda.initialize_cuda_kernels(cupy)
cuda = <module 'awkward._connect.cuda' from '/opt/build-venv/lib/python3.8/site-packages/awkward/_connect/cuda/__init__.py'>
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
index = ('awkward_missing_repeat', <class 'numpy.int64'>, <class 'numpy.int64'>)
self = <awkward._backends.cupy.CupyBackend object at 0x7f55518a8700>
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
def initialize_cuda_kernels(cupy):
if cupy is not None:
global kernel
if kernel is None:
> import awkward._connect.cuda._kernel_signatures
E ModuleNotFoundError: No module named 'awkward._connect.cuda._kernel_signatures'
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
/opt/build-venv/lib/python3.8/site-packages/awkward/_connect/cuda/__init__.py:162: ModuleNotFoundError
____________________ test_cuda_awkward_missing_repeat_64_7 _____________________
def test_cuda_awkward_missing_repeat_64_7():
outindex = cupy.array([123, 123, 123, 123, 123, 123, 123, 123, 123], dtype=cupy.int64)
index = cupy.array([1, 2, 2, 3, 0, 2, 0, 2, 1, 1], dtype=cupy.int64)
indexlength = 3
repetitions = 3
regularsize = 2
> funcC = cupy_backend['awkward_missing_repeat', cupy.int64, cupy.int64]
index = array([1, 2, 2, 3, 0, 2, 0, 2, 1, 1])
indexlength = 3
outindex = array([123, 123, 123, 123, 123, 123, 123, 123, 123])
regularsize = 2
repetitions = 3
tests-cuda-kernels/test_cudaawkward_missing_repeat_64.py:122:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/opt/build-venv/lib/python3.8/site-packages/awkward/_backends/cupy.py:38: in __getitem__
_cuda_kernels = cuda.initialize_cuda_kernels(cupy)
cuda = <module 'awkward._connect.cuda' from '/opt/build-venv/lib/python3.8/site-packages/awkward/_connect/cuda/__init__.py'>
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
index = ('awkward_missing_repeat', <class 'numpy.int64'>, <class 'numpy.int64'>)
self = <awkward._backends.cupy.CupyBackend object at 0x7f55518a8700>
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
def initialize_cuda_kernels(cupy):
if cupy is not None:
global kernel
if kernel is None:
> import awkward._connect.cuda._kernel_signatures
E ModuleNotFoundError: No module named 'awkward._connect.cuda._kernel_signatures'
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
/opt/build-venv/lib/python3.8/site-packages/awkward/_connect/cuda/__init__.py:162: ModuleNotFoundError
____________________ test_cuda_awkward_missing_repeat_64_8 _____________________
def test_cuda_awkward_missing_repeat_64_8():
outindex = cupy.array([123, 123, 123, 123, 123, 123, 123, 123, 123], dtype=cupy.int64)
index = cupy.array([1, 2, 2, 3, 0, 2, 0, 2, 1, 1], dtype=cupy.int64)
indexlength = 3
repetitions = 3
regularsize = 1
> funcC = cupy_backend['awkward_missing_repeat', cupy.int64, cupy.int64]
index = array([1, 2, 2, 3, 0, 2, 0, 2, 1, 1])
indexlength = 3
outindex = array([123, 123, 123, 123, 123, 123, 123, 123, 123])
regularsize = 1
repetitions = 3
tests-cuda-kernels/test_cudaawkward_missing_repeat_64.py:138:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/opt/build-venv/lib/python3.8/site-packages/awkward/_backends/cupy.py:38: in __getitem__
_cuda_kernels = cuda.initialize_cuda_kernels(cupy)
cuda = <module 'awkward._connect.cuda' from '/opt/build-venv/lib/python3.8/site-packages/awkward/_connect/cuda/__init__.py'>
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
index = ('awkward_missing_repeat', <class 'numpy.int64'>, <class 'numpy.int64'>)
self = <awkward._backends.cupy.CupyBackend object at 0x7f55518a8700>
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
def initialize_cuda_kernels(cupy):
if cupy is not None:
global kernel
if kernel is None:
> import awkward._connect.cuda._kernel_signatures
E ModuleNotFoundError: No module named 'awkward._connect.cuda._kernel_signatures'
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
/opt/build-venv/lib/python3.8/site-packages/awkward/_connect/cuda/__init__.py:162: ModuleNotFoundError
____________________ test_cuda_awkward_missing_repeat_64_9 _____________________
def test_cuda_awkward_missing_repeat_64_9():
outindex = cupy.array([123, 123, 123, 123, 123, 123, 123, 123, 123], dtype=cupy.int64)
index = cupy.array([1, 2, 2, 3, 0, 2, 0, 2, 1, 1], dtype=cupy.int64)
indexlength = 3
repetitions = 3
regularsize = 2
> funcC = cupy_backend['awkward_missing_repeat', cupy.int64, cupy.int64]
index = array([1, 2, 2, 3, 0, 2, 0, 2, 1, 1])
indexlength = 3
outindex = array([123, 123, 123, 123, 123, 123, 123, 123, 123])
regularsize = 2
repetitions = 3
tests-cuda-kernels/test_cudaawkward_missing_repeat_64.py:154:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/opt/build-venv/lib/python3.8/site-packages/awkward/_backends/cupy.py:38: in __getitem__
_cuda_kernels = cuda.initialize_cuda_kernels(cupy)
cuda = <module 'awkward._connect.cuda' from '/opt/build-venv/lib/python3.8/site-packages/awkward/_connect/cuda/__init__.py'>
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
index = ('awkward_missing_repeat', <class 'numpy.int64'>, <class 'numpy.int64'>)
self = <awkward._backends.cupy.CupyBackend object at 0x7f55518a8700>
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
def initialize_cuda_kernels(cupy):
if cupy is not None:
global kernel
if kernel is None:
> import awkward._connect.cuda._kernel_signatures
E ModuleNotFoundError: No module named 'awkward._connect.cuda._kernel_signatures'
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
/opt/build-venv/lib/python3.8/site-packages/awkward/_connect/cuda/__init__.py:162: ModuleNotFoundError
____________________ test_cuda_awkward_missing_repeat_64_10 ____________________
def test_cuda_awkward_missing_repeat_64_10():
outindex = cupy.array([123, 123, 123, 123, 123, 123, 123, 123, 123], dtype=cupy.int64)
index = cupy.array([1, 2, 2, 3, 0, 2, 0, 2, 1, 1], dtype=cupy.int64)
indexlength = 3
repetitions = 3
regularsize = 0
> funcC = cupy_backend['awkward_missing_repeat', cupy.int64, cupy.int64]
index = array([1, 2, 2, 3, 0, 2, 0, 2, 1, 1])
indexlength = 3
outindex = array([123, 123, 123, 123, 123, 123, 123, 123, 123])
regularsize = 0
repetitions = 3
tests-cuda-kernels/test_cudaawkward_missing_repeat_64.py:170:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/opt/build-venv/lib/python3.8/site-packages/awkward/_backends/cupy.py:38: in __getitem__
_cuda_kernels = cuda.initialize_cuda_kernels(cupy)
cuda = <module 'awkward._connect.cuda' from '/opt/build-venv/lib/python3.8/site-packages/awkward/_connect/cuda/__init__.py'>
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
index = ('awkward_missing_repeat', <class 'numpy.int64'>, <class 'numpy.int64'>)
self = <awkward._backends.cupy.CupyBackend object at 0x7f55518a8700>
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
def initialize_cuda_kernels(cupy):
if cupy is not None:
global kernel
if kernel is None:
> import awkward._connect.cuda._kernel_signatures
E ModuleNotFoundError: No module named 'awkward._connect.cuda._kernel_signatures'
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
/opt/build-venv/lib/python3.8/site-packages/awkward/_connect/cuda/__init__.py:162: ModuleNotFoundError
____________________ test_cuda_awkward_missing_repeat_64_11 ____________________
def test_cuda_awkward_missing_repeat_64_11():
outindex = cupy.array([123, 123, 123, 123, 123, 123, 123, 123, 123], dtype=cupy.int64)
index = cupy.array([1, 3, 0, 3, 5, 2, 0, 2, 1, 1], dtype=cupy.int64)
indexlength = 3
repetitions = 3
regularsize = 3
> funcC = cupy_backend['awkward_missing_repeat', cupy.int64, cupy.int64]
index = array([1, 3, 0, 3, 5, 2, 0, 2, 1, 1])
indexlength = 3
outindex = array([123, 123, 123, 123, 123, 123, 123, 123, 123])
regularsize = 3
repetitions = 3
tests-cuda-kernels/test_cudaawkward_missing_repeat_64.py:186:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/opt/build-venv/lib/python3.8/site-packages/awkward/_backends/cupy.py:38: in __getitem__
_cuda_kernels = cuda.initialize_cuda_kernels(cupy)
cuda = <module 'awkward._connect.cuda' from '/opt/build-venv/lib/python3.8/site-packages/awkward/_connect/cuda/__init__.py'>
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
index = ('awkward_missing_repeat', <class 'numpy.int64'>, <class 'numpy.int64'>)
self = <awkward._backends.cupy.CupyBackend object at 0x7f55518a8700>
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
def initialize_cuda_kernels(cupy):
if cupy is not None:
global kernel
if kernel is None:
> import awkward._connect.cuda._kernel_signatures
E ModuleNotFoundError: No module named 'awkward._connect.cuda._kernel_signatures'
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
/opt/build-venv/lib/python3.8/site-packages/awkward/_connect/cuda/__init__.py:162: ModuleNotFoundError
____________________ test_cuda_awkward_missing_repeat_64_12 ____________________
def test_cuda_awkward_missing_repeat_64_12():
outindex = cupy.array([123, 123, 123, 123, 123, 123, 123, 123, 123], dtype=cupy.int64)
index = cupy.array([1, 3, 0, 3, 5, 2, 0, 2, 1, 1], dtype=cupy.int64)
indexlength = 3
repetitions = 3
regularsize = 2
> funcC = cupy_backend['awkward_missing_repeat', cupy.int64, cupy.int64]
index = array([1, 3, 0, 3, 5, 2, 0, 2, 1, 1])
indexlength = 3
outindex = array([123, 123, 123, 123, 123, 123, 123, 123, 123])
regularsize = 2
repetitions = 3
tests-cuda-kernels/test_cudaawkward_missing_repeat_64.py:202:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/opt/build-venv/lib/python3.8/site-packages/awkward/_backends/cupy.py:38: in __getitem__
_cuda_kernels = cuda.initialize_cuda_kernels(cupy)
cuda = <module 'awkward._connect.cuda' from '/opt/build-venv/lib/python3.8/site-packages/awkward/_connect/cuda/__init__.py'>
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
index = ('awkward_missing_repeat', <class 'numpy.int64'>, <class 'numpy.int64'>)
self = <awkward._backends.cupy.CupyBackend object at 0x7f55518a8700>
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
def initialize_cuda_kernels(cupy):
if cupy is not None:
global kernel
if kernel is None:
> import awkward._connect.cuda._kernel_signatures
E ModuleNotFoundError: No module named 'awkward._connect.cuda._kernel_signatures'
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
/opt/build-venv/lib/python3.8/site-packages/awkward/_connect/cuda/__init__.py:162: ModuleNotFoundError
____________________ test_cuda_awkward_missing_repeat_64_13 ____________________
def test_cuda_awkward_missing_repeat_64_13():
outindex = cupy.array([123, 123, 123, 123, 123, 123, 123, 123, 123], dtype=cupy.int64)
index = cupy.array([1, 3, 0, 3, 5, 2, 0, 2, 1, 1], dtype=cupy.int64)
indexlength = 3
repetitions = 3
regularsize = 1
> funcC = cupy_backend['awkward_missing_repeat', cupy.int64, cupy.int64]
index = array([1, 3, 0, 3, 5, 2, 0, 2, 1, 1])
indexlength = 3
outindex = array([123, 123, 123, 123, 123, 123, 123, 123, 123])
regularsize = 1
repetitions = 3
tests-cuda-kernels/test_cudaawkward_missing_repeat_64.py:218:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/opt/build-venv/lib/python3.8/site-packages/awkward/_backends/cupy.py:38: in __getitem__
_cuda_kernels = cuda.initialize_cuda_kernels(cupy)
cuda = <module 'awkward._connect.cuda' from '/opt/build-venv/lib/python3.8/site-packages/awkward/_connect/cuda/__init__.py'>
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
index = ('awkward_missing_repeat', <class 'numpy.int64'>, <class 'numpy.int64'>)
self = <awkward._backends.cupy.CupyBackend object at 0x7f55518a8700>
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
def initialize_cuda_kernels(cupy):
if cupy is not None:
global kernel
if kernel is None:
> import awkward._connect.cuda._kernel_signatures
E ModuleNotFoundError: No module named 'awkward._connect.cuda._kernel_signatures'
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
/opt/build-venv/lib/python3.8/site-packages/awkward/_connect/cuda/__init__.py:162: ModuleNotFoundError
____________________ test_cuda_awkward_missing_repeat_64_14 ____________________
def test_cuda_awkward_missing_repeat_64_14():
outindex = cupy.array([123, 123, 123, 123, 123, 123, 123, 123, 123], dtype=cupy.int64)
index = cupy.array([1, 3, 0, 3, 5, 2, 0, 2, 1, 1], dtype=cupy.int64)
indexlength = 3
repetitions = 3
regularsize = 2
> funcC = cupy_backend['awkward_missing_repeat', cupy.int64, cupy.int64]
index = array([1, 3, 0, 3, 5, 2, 0, 2, 1, 1])
indexlength = 3
outindex = array([123, 123, 123, 123, 123, 123, 123, 123, 123])
regularsize = 2
repetitions = 3
tests-cuda-kernels/test_cudaawkward_missing_repeat_64.py:234:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/opt/build-venv/lib/python3.8/site-packages/awkward/_backends/cupy.py:38: in __getitem__
_cuda_kernels = cuda.initialize_cuda_kernels(cupy)
cuda = <module 'awkward._connect.cuda' from '/opt/build-venv/lib/python3.8/site-packages/awkward/_connect/cuda/__init__.py'>
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
index = ('awkward_missing_repeat', <class 'numpy.int64'>, <class 'numpy.int64'>)
self = <awkward._backends.cupy.CupyBackend object at 0x7f55518a8700>
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
def initialize_cuda_kernels(cupy):
if cupy is not None:
global kernel
if kernel is None:
> import awkward._connect.cuda._kernel_signatures
E ModuleNotFoundError: No module named 'awkward._connect.cuda._kernel_signatures'
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
/opt/build-venv/lib/python3.8/site-packages/awkward/_connect/cuda/__init__.py:162: ModuleNotFoundError
____________________ test_cuda_awkward_missing_repeat_64_15 ____________________
def test_cuda_awkward_missing_repeat_64_15():
outindex = cupy.array([123, 123, 123, 123, 123, 123, 123, 123, 123], dtype=cupy.int64)
index = cupy.array([1, 3, 0, 3, 5, 2, 0, 2, 1, 1], dtype=cupy.int64)
indexlength = 3
repetitions = 3
regularsize = 0
> funcC = cupy_backend['awkward_missing_repeat', cupy.int64, cupy.int64]
index = array([1, 3, 0, 3, 5, 2, 0, 2, 1, 1])
indexlength = 3
outindex = array([123, 123, 123, 123, 123, 123, 123, 123, 123])
regularsize = 0
repetitions = 3
tests-cuda-kernels/test_cudaawkward_missing_repeat_64.py:250:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/opt/build-venv/lib/python3.8/site-packages/awkward/_backends/cupy.py:38: in __getitem__
_cuda_kernels = cuda.initialize_cuda_kernels(cupy)
cuda = <module 'awkward._connect.cuda' from '/opt/build-venv/lib/python3.8/site-packages/awkward/_connect/cuda/__init__.py'>
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
index = ('awkward_missing_repeat', <class 'numpy.int64'>, <class 'numpy.int64'>)
self = <awkward._backends.cupy.CupyBackend object at 0x7f55518a8700>
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
def initialize_cuda_kernels(cupy):
if cupy is not None:
global kernel
if kernel is None:
> import awkward._connect.cuda._kernel_signatures
E ModuleNotFoundError: No module named 'awkward._connect.cuda._kernel_signatures'
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
/opt/build-venv/lib/python3.8/site-packages/awkward/_connect/cuda/__init__.py:162: ModuleNotFoundError
____________________ test_cuda_awkward_missing_repeat_64_16 ____________________
def test_cuda_awkward_missing_repeat_64_16():
outindex = cupy.array([123, 123, 123, 123, 123, 123, 123, 123, 123], dtype=cupy.int64)
index = cupy.array([1, 4, 2, 3, 1, 2, 3, 1, 4, 3, 2, 1, 3, 2, 4, 5, 1, 2, 3, 4, 5], dtype=cupy.int64)
indexlength = 3
repetitions = 3
regularsize = 3
> funcC = cupy_backend['awkward_missing_repeat', cupy.int64, cupy.int64]
index = array([1, 4, 2, 3, 1, 2, 3, 1, 4, 3, 2, 1, 3, 2, 4, 5, 1, 2, 3, 4, 5])
indexlength = 3
outindex = array([123, 123, 123, 123, 123, 123, 123, 123, 123])
regularsize = 3
repetitions = 3
tests-cuda-kernels/test_cudaawkward_missing_repeat_64.py:266:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/opt/build-venv/lib/python3.8/site-packages/awkward/_backends/cupy.py:38: in __getitem__
_cuda_kernels = cuda.initialize_cuda_kernels(cupy)
cuda = <module 'awkward._connect.cuda' from '/opt/build-venv/lib/python3.8/site-packages/awkward/_connect/cuda/__init__.py'>
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
index = ('awkward_missing_repeat', <class 'numpy.int64'>, <class 'numpy.int64'>)
self = <awkward._backends.cupy.CupyBackend object at 0x7f55518a8700>
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
def initialize_cuda_kernels(cupy):
if cupy is not None:
global kernel
if kernel is None:
> import awkward._connect.cuda._kernel_signatures
E ModuleNotFoundError: No module named 'awkward._connect.cuda._kernel_signatures'
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
/opt/build-venv/lib/python3.8/site-packages/awkward/_connect/cuda/__init__.py:162: ModuleNotFoundError
____________________ test_cuda_awkward_missing_repeat_64_17 ____________________
def test_cuda_awkward_missing_repeat_64_17():
outindex = cupy.array([123, 123, 123, 123, 123, 123, 123, 123, 123], dtype=cupy.int64)
index = cupy.array([1, 4, 2, 3, 1, 2, 3, 1, 4, 3, 2, 1, 3, 2, 4, 5, 1, 2, 3, 4, 5], dtype=cupy.int64)
indexlength = 3
repetitions = 3
regularsize = 2
> funcC = cupy_backend['awkward_missing_repeat', cupy.int64, cupy.int64]
index = array([1, 4, 2, 3, 1, 2, 3, 1, 4, 3, 2, 1, 3, 2, 4, 5, 1, 2, 3, 4, 5])
indexlength = 3
outindex = array([123, 123, 123, 123, 123, 123, 123, 123, 123])
regularsize = 2
repetitions = 3
tests-cuda-kernels/test_cudaawkward_missing_repeat_64.py:282:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/opt/build-venv/lib/python3.8/site-packages/awkward/_backends/cupy.py:38: in __getitem__
_cuda_kernels = cuda.initialize_cuda_kernels(cupy)
cuda = <module 'awkward._connect.cuda' from '/opt/build-venv/lib/python3.8/site-packages/awkward/_connect/cuda/__init__.py'>
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
index = ('awkward_missing_repeat', <class 'numpy.int64'>, <class 'numpy.int64'>)
self = <awkward._backends.cupy.CupyBackend object at 0x7f55518a8700>
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
def initialize_cuda_kernels(cupy):
if cupy is not None:
global kernel
if kernel is None:
> import awkward._connect.cuda._kernel_signatures
E ModuleNotFoundError: No module named 'awkward._connect.cuda._kernel_signatures'
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
/opt/build-venv/lib/python3.8/site-packages/awkward/_connect/cuda/__init__.py:162: ModuleNotFoundError
____________________ test_cuda_awkward_missing_repeat_64_18 ____________________
def test_cuda_awkward_missing_repeat_64_18():
outindex = cupy.array([123, 123, 123, 123, 123, 123, 123, 123, 123], dtype=cupy.int64)
index = cupy.array([1, 4, 2, 3, 1, 2, 3, 1, 4, 3, 2, 1, 3, 2, 4, 5, 1, 2, 3, 4, 5], dtype=cupy.int64)
indexlength = 3
repetitions = 3
regularsize = 1
> funcC = cupy_backend['awkward_missing_repeat', cupy.int64, cupy.int64]
index = array([1, 4, 2, 3, 1, 2, 3, 1, 4, 3, 2, 1, 3, 2, 4, 5, 1, 2, 3, 4, 5])
indexlength = 3
outindex = array([123, 123, 123, 123, 123, 123, 123, 123, 123])
regularsize = 1
repetitions = 3
tests-cuda-kernels/test_cudaawkward_missing_repeat_64.py:298:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/opt/build-venv/lib/python3.8/site-packages/awkward/_backends/cupy.py:38: in __getitem__
_cuda_kernels = cuda.initialize_cuda_kernels(cupy)
cuda = <module 'awkward._connect.cuda' from '/opt/build-venv/lib/python3.8/site-packages/awkward/_connect/cuda/__init__.py'>
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
index = ('awkward_missing_repeat', <class 'numpy.int64'>, <class 'numpy.int64'>)
self = <awkward._backends.cupy.CupyBackend object at 0x7f55518a8700>
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
def initialize_cuda_kernels(cupy):
if cupy is not None:
global kernel
if kernel is None:
> import awkward._connect.cuda._kernel_signatures
E ModuleNotFoundError: No module named 'awkward._connect.cuda._kernel_signatures'
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
/opt/build-venv/lib/python3.8/site-packages/awkward/_connect/cuda/__init__.py:162: ModuleNotFoundError
____________________ test_cuda_awkward_missing_repeat_64_19 ____________________
def test_cuda_awkward_missing_repeat_64_19():
outindex = cupy.array([123, 123, 123, 123, 123, 123, 123, 123, 123], dtype=cupy.int64)
index = cupy.array([1, 4, 2, 3, 1, 2, 3, 1, 4, 3, 2, 1, 3, 2, 4, 5, 1, 2, 3, 4, 5], dtype=cupy.int64)
indexlength = 3
repetitions = 3
regularsize = 2
> funcC = cupy_backend['awkward_missing_repeat', cupy.int64, cupy.int64]
index = array([1, 4, 2, 3, 1, 2, 3, 1, 4, 3, 2, 1, 3, 2, 4, 5, 1, 2, 3, 4, 5])
indexlength = 3
outindex = array([123, 123, 123, 123, 123, 123, 123, 123, 123])
regularsize = 2
repetitions = 3
tests-cuda-kernels/test_cudaawkward_missing_repeat_64.py:314:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/opt/build-venv/lib/python3.8/site-packages/awkward/_backends/cupy.py:38: in __getitem__
_cuda_kernels = cuda.initialize_cuda_kernels(cupy)
cuda = <module 'awkward._connect.cuda' from '/opt/build-venv/lib/python3.8/site-packages/awkward/_connect/cuda/__init__.py'>
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
index = ('awkward_missing_repeat', <class 'numpy.int64'>, <class 'numpy.int64'>)
self = <awkward._backends.cupy.CupyBackend object at 0x7f55518a8700>
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
def initialize_cuda_kernels(cupy):
if cupy is not None:
global kernel
if kernel is None:
> import awkward._connect.cuda._kernel_signatures
E ModuleNotFoundError: No module named 'awkward._connect.cuda._kernel_signatures'
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
/opt/build-venv/lib/python3.8/site-packages/awkward/_connect/cuda/__init__.py:162: ModuleNotFoundError
____________________ test_cuda_awkward_missing_repeat_64_20 ____________________
def test_cuda_awkward_missing_repeat_64_20():
outindex = cupy.array([123, 123, 123, 123, 123, 123, 123, 123, 123], dtype=cupy.int64)
index = cupy.array([1, 4, 2, 3, 1, 2, 3, 1, 4, 3, 2, 1, 3, 2, 4, 5, 1, 2, 3, 4, 5], dtype=cupy.int64)
indexlength = 3
repetitions = 3
regularsize = 0
> funcC = cupy_backend['awkward_missing_repeat', cupy.int64, cupy.int64]
index = array([1, 4, 2, 3, 1, 2, 3, 1, 4, 3, 2, 1, 3, 2, 4, 5, 1, 2, 3, 4, 5])
indexlength = 3
outindex = array([123, 123, 123, 123, 123, 123, 123, 123, 123])
regularsize = 0
repetitions = 3
tests-cuda-kernels/test_cudaawkward_missing_repeat_64.py:330:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/opt/build-venv/lib/python3.8/site-packages/awkward/_backends/cupy.py:38: in __getitem__
_cuda_kernels = cuda.initialize_cuda_kernels(cupy)
cuda = <module 'awkward._connect.cuda' from '/opt/build-venv/lib/python3.8/site-packages/awkward/_connect/cuda/__init__.py'>
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
index = ('awkward_missing_repeat', <class 'numpy.int64'>, <class 'numpy.int64'>)
self = <awkward._backends.cupy.CupyBackend object at 0x7f55518a8700>
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
def initialize_cuda_kernels(cupy):
if cupy is not None:
global kernel
if kernel is None:
> import awkward._connect.cuda._kernel_signatures
E ModuleNotFoundError: No module named 'awkward._connect.cuda._kernel_signatures'
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
/opt/build-venv/lib/python3.8/site-packages/awkward/_connect/cuda/__init__.py:162: ModuleNotFoundError
____________________ test_cuda_awkward_missing_repeat_64_21 ____________________
def test_cuda_awkward_missing_repeat_64_21():
outindex = cupy.array([123, 123, 123, 123, 123, 123, 123, 123, 123], dtype=cupy.int64)
index = cupy.array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0], dtype=cupy.int64)
indexlength = 3
repetitions = 3
regularsize = 3
> funcC = cupy_backend['awkward_missing_repeat', cupy.int64, cupy.int64]
index = array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
indexlength = 3
outindex = array([123, 123, 123, 123, 123, 123, 123, 123, 123])
regularsize = 3
repetitions = 3
tests-cuda-kernels/test_cudaawkward_missing_repeat_64.py:346:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/opt/build-venv/lib/python3.8/site-packages/awkward/_backends/cupy.py:38: in __getitem__
_cuda_kernels = cuda.initialize_cuda_kernels(cupy)
cuda = <module 'awkward._connect.cuda' from '/opt/build-venv/lib/python3.8/site-packages/awkward/_connect/cuda/__init__.py'>
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
index = ('awkward_missing_repeat', <class 'numpy.int64'>, <class 'numpy.int64'>)
self = <awkward._backends.cupy.CupyBackend object at 0x7f55518a8700>
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
def initialize_cuda_kernels(cupy):
if cupy is not None:
global kernel
if kernel is None:
> import awkward._connect.cuda._kernel_signatures
E ModuleNotFoundError: No module named 'awkward._connect.cuda._kernel_signatures'
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
/opt/build-venv/lib/python3.8/site-packages/awkward/_connect/cuda/__init__.py:162: ModuleNotFoundError
____________________ test_cuda_awkward_missing_repeat_64_22 ____________________
def test_cuda_awkward_missing_repeat_64_22():
outindex = cupy.array([123, 123, 123, 123, 123, 123, 123, 123, 123], dtype=cupy.int64)
index = cupy.array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0], dtype=cupy.int64)
indexlength = 3
repetitions = 3
regularsize = 2
> funcC = cupy_backend['awkward_missing_repeat', cupy.int64, cupy.int64]
index = array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
indexlength = 3
outindex = array([123, 123, 123, 123, 123, 123, 123, 123, 123])
regularsize = 2
repetitions = 3
tests-cuda-kernels/test_cudaawkward_missing_repeat_64.py:362:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/opt/build-venv/lib/python3.8/site-packages/awkward/_backends/cupy.py:38: in __getitem__
_cuda_kernels = cuda.initialize_cuda_kernels(cupy)
cuda = <module 'awkward._connect.cuda' from '/opt/build-venv/lib/python3.8/site-packages/awkward/_connect/cuda/__init__.py'>
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
index = ('awkward_missing_repeat', <class 'numpy.int64'>, <class 'numpy.int64'>)
self = <awkward._backends.cupy.CupyBackend object at 0x7f55518a8700>
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
def initialize_cuda_kernels(cupy):
if cupy is not None:
global kernel
if kernel is None:
> import awkward._connect.cuda._kernel_signatures
E ModuleNotFoundError: No module named 'awkward._connect.cuda._kernel_signatures'
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
/opt/build-venv/lib/python3.8/site-packages/awkward/_connect/cuda/__init__.py:162: ModuleNotFoundError
____________________ test_cuda_awkward_missing_repeat_64_23 ____________________
def test_cuda_awkward_missing_repeat_64_23():
outindex = cupy.array([123, 123, 123, 123, 123, 123, 123, 123, 123], dtype=cupy.int64)
index = cupy.array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0], dtype=cupy.int64)
indexlength = 3
repetitions = 3
regularsize = 1
> funcC = cupy_backend['awkward_missing_repeat', cupy.int64, cupy.int64]
index = array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
indexlength = 3
outindex = array([123, 123, 123, 123, 123, 123, 123, 123, 123])
regularsize = 1
repetitions = 3
tests-cuda-kernels/test_cudaawkward_missing_repeat_64.py:378:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/opt/build-venv/lib/python3.8/site-packages/awkward/_backends/cupy.py:38: in __getitem__
_cuda_kernels = cuda.initialize_cuda_kernels(cupy)
cuda = <module 'awkward._connect.cuda' from '/opt/build-venv/lib/python3.8/site-packages/awkward/_connect/cuda/__init__.py'>
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
index = ('awkward_missing_repeat', <class 'numpy.int64'>, <class 'numpy.int64'>)
self = <awkward._backends.cupy.CupyBackend object at 0x7f55518a8700>
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
def initialize_cuda_kernels(cupy):
if cupy is not None:
global kernel
if kernel is None:
> import awkward._connect.cuda._kernel_signatures
E ModuleNotFoundError: No module named 'awkward._connect.cuda._kernel_signatures'
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
/opt/build-venv/lib/python3.8/site-packages/awkward/_connect/cuda/__init__.py:162: ModuleNotFoundError
____________________ test_cuda_awkward_missing_repeat_64_24 ____________________
def test_cuda_awkward_missing_repeat_64_24():
outindex = cupy.array([123, 123, 123, 123, 123, 123, 123, 123, 123], dtype=cupy.int64)
index = cupy.array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0], dtype=cupy.int64)
indexlength = 3
repetitions = 3
regularsize = 2
> funcC = cupy_backend['awkward_missing_repeat', cupy.int64, cupy.int64]
index = array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
indexlength = 3
outindex = array([123, 123, 123, 123, 123, 123, 123, 123, 123])
regularsize = 2
repetitions = 3
tests-cuda-kernels/test_cudaawkward_missing_repeat_64.py:394:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/opt/build-venv/lib/python3.8/site-packages/awkward/_backends/cupy.py:38: in __getitem__
_cuda_kernels = cuda.initialize_cuda_kernels(cupy)
cuda = <module 'awkward._connect.cuda' from '/opt/build-venv/lib/python3.8/site-packages/awkward/_connect/cuda/__init__.py'>
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
index = ('awkward_missing_repeat', <class 'numpy.int64'>, <class 'numpy.int64'>)
self = <awkward._backends.cupy.CupyBackend object at 0x7f55518a8700>
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
def initialize_cuda_kernels(cupy):
if cupy is not None:
global kernel
if kernel is None:
> import awkward._connect.cuda._kernel_signatures
E ModuleNotFoundError: No module named 'awkward._connect.cuda._kernel_signatures'
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
/opt/build-venv/lib/python3.8/site-packages/awkward/_connect/cuda/__init__.py:162: ModuleNotFoundError
____________________ test_cuda_awkward_missing_repeat_64_25 ____________________
def test_cuda_awkward_missing_repeat_64_25():
outindex = cupy.array([123, 123, 123, 123, 123, 123, 123, 123, 123], dtype=cupy.int64)
index = cupy.array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0], dtype=cupy.int64)
indexlength = 3
repetitions = 3
regularsize = 0
> funcC = cupy_backend['awkward_missing_repeat', cupy.int64, cupy.int64]
index = array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
indexlength = 3
outindex = array([123, 123, 123, 123, 123, 123, 123, 123, 123])
regularsize = 0
repetitions = 3
tests-cuda-kernels/test_cudaawkward_missing_repeat_64.py:410:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/opt/build-venv/lib/python3.8/site-packages/awkward/_backends/cupy.py:38: in __getitem__
_cuda_kernels = cuda.initialize_cuda_kernels(cupy)
cuda = <module 'awkward._connect.cuda' from '/opt/build-venv/lib/python3.8/site-packages/awkward/_connect/cuda/__init__.py'>
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
index = ('awkward_missing_repeat', <class 'numpy.int64'>, <class 'numpy.int64'>)
self = <awkward._backends.cupy.CupyBackend object at 0x7f55518a8700>
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
def initialize_cuda_kernels(cupy):
if cupy is not None:
global kernel
if kernel is None:
> import awkward._connect.cuda._kernel_signatures
E ModuleNotFoundError: No module named 'awkward._connect.cuda._kernel_signatures'
cupy = <module 'cupy' from '/opt/build-venv/lib/python3.8/site-packages/cupy/__init__.py'>
/opt/build-venv/lib/python3.8/site-packages/awkward/_connect/cuda/__init__.py:162: ModuleNotFoundError
=========================== short test summary info ============================
SKIPPED [1] tests-cuda/test_1276_cuda_num.py:14: too old Numba version
SKIPPED [1] tests-cuda/test_1276_cuda_transfers.py:14: too old Numba version
SKIPPED [1] tests-cuda/test_1276_cupy_interop.py:16: too old Numba version
SKIPPED [1] tests-cuda/test_1276_from_cupy.py:14: too old Numba version
SKIPPED [1] tests-cuda/test_1300_same_for_numba_cuda.py:10: could not import 'numba': No module named 'numba'
SKIPPED [1] tests-cuda/test_1381_check_errors.py:17: too old Numba version
SKIPPED [1] tests-cuda/test_1809_array_cuda_jit.py:10: could not import 'numba': No module named 'numba'
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_ByteMaskedArray_reduce_next_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_ByteMaskedArray_reduce_next_nonlocal_nextshifts_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_IndexedArray32_overlay_mask8_to64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_IndexedArray32_reduce_next_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_IndexedArray32_reduce_next_nonlocal_nextshifts_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_IndexedArray32_reduce_next_nonlocal_nextshifts_fromshifts_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_IndexedArray64_overlay_mask8_to64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_IndexedArray64_reduce_next_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_IndexedArray64_reduce_next_nonlocal_nextshifts_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_IndexedArray64_reduce_next_nonlocal_nextshifts_fromshifts_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_IndexedArrayU32_overlay_mask8_to64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_IndexedArrayU32_reduce_next_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_IndexedArrayU32_reduce_next_nonlocal_nextshifts_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_IndexedArrayU32_reduce_next_nonlocal_nextshifts_fromshifts_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_IndexedArray_reduce_next_fix_offsets_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_ListOffsetArray32_rpad_and_clip_axis1_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_ListOffsetArray64_rpad_and_clip_axis1_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_ListOffsetArrayU32_rpad_and_clip_axis1_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_NumpyArray_reduce_adjust_starts_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_NumpyArray_reduce_adjust_starts_shifts_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_NumpyArray_reduce_mask_ByteMaskedArray_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_argmax_float32_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_argmax_float64_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_argmax_int16_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_argmax_int32_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_argmax_int64_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_argmax_int8_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_argmax_uint16_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_argmax_uint32_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_argmax_uint64_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_argmax_uint8_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_argmin_float32_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_argmin_float64_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_argmin_int16_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_argmin_int32_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_argmin_int64_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_argmin_int8_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_argmin_uint16_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_argmin_uint32_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_argmin_uint64_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_argmin_uint8_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_count_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_countnonzero_bool_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_countnonzero_float32_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_countnonzero_float64_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_countnonzero_int16_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_countnonzero_int32_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_countnonzero_int64_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_countnonzero_int8_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_countnonzero_uint16_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_countnonzero_uint32_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_countnonzero_uint64_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_countnonzero_uint8_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_max_float32_float32_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_max_float64_float64_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_max_int16_int16_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_max_int32_int32_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_max_int64_int64_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_max_int8_int8_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_max_uint16_uint16_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_max_uint32_uint32_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_max_uint64_uint64_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_max_uint8_uint8_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_min_float32_float32_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_min_float64_float64_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_min_int16_int16_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_min_int32_int32_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_min_int64_int64_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_min_int8_int8_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_min_uint16_uint16_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_min_uint32_uint32_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_min_uint64_uint64_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_min_uint8_uint8_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_prod_bool_bool_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_prod_bool_float32_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_prod_bool_float64_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_prod_bool_int16_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_prod_bool_int32_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_prod_bool_int64_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_prod_bool_int8_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_prod_bool_uint16_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_prod_bool_uint32_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_prod_bool_uint64_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_prod_bool_uint8_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_sum_bool_bool_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_sum_bool_float32_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_sum_bool_float64_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_sum_bool_int16_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_sum_bool_int32_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_sum_bool_int64_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_sum_bool_int8_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_sum_bool_uint16_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_sum_bool_uint32_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_sum_bool_uint64_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_sum_bool_uint8_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_sum_float32_float32_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_sum_float64_float64_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_sum_int32_bool_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_sum_int32_int16_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_sum_int32_int32_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_sum_int32_int8_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_sum_int64_bool_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_sum_int64_int16_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_sum_int64_int32_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_sum_int64_int64_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_sum_int64_int8_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_sum_uint32_uint16_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_sum_uint32_uint32_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_sum_uint32_uint8_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_sum_uint64_uint16_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_sum_uint64_uint32_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_sum_uint64_uint64_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_sum_uint64_uint8_64.py:20: Unable to generate any tests for kernel
====================== 1229 failed, 120 skipped in 51.73s ======================
``` | closed | 2023-11-15T13:55:08Z | 2023-11-15T13:57:22Z | https://github.com/scikit-hep/awkward/issues/2832 | [] | agoose77 | 0 |
ivy-llc/ivy | tensorflow | 28,371 | Fix Frontend Failing Test: jax - math.tensorflow.math.zero_fraction | To-do List: https://github.com/unifyai/ivy/issues/27496 | closed | 2024-02-21T15:44:59Z | 2024-02-26T11:24:28Z | https://github.com/ivy-llc/ivy/issues/28371 | [
"Sub Task"
] | Sai-Suraj-27 | 0 |
robotframework/robotframework | automation | 5,107 | Not using custom library name in warning when library is empty | *Using RF 7.0*
Consider the following example:
```
*** Settings ***
Library    MyLibrary    AS    still_empty_yet
*** Test Cases ***
Demo
    No Operation
```
When executed I get a warning:
> Imported library 'MyLibrary' has no keywords.
I would expect the custom name `still_empty_yet` instead of "MyLibrary".
The use case is that I import a library with dynamic keywords. So the library name is identical, but the dynamically loaded keywords in each library instance are different. Thus, knowing the custom name of that library would help a lot in finding the empty one. | open | 2024-04-08T13:50:27Z | 2024-04-08T13:50:41Z | https://github.com/robotframework/robotframework/issues/5107 | [] | Noordsestern | 0 |
influxdata/influxdb-client-python | jupyter | 570 | Empty string in database becomes None value in Python | ### Specifications
* Client Version: 1.34.0
* InfluxDB Version: 2.6.1
* Platform: Linux
### Code sample to reproduce problem
```python
import influxdb_client

with influxdb_client.InfluxDBClient(
    org = ...,
    token = ...,
    url = ...,
) as client:
    q = f"""
    from(bucket: ...)
        //|> filter(fn: (r) => exists r._value)
        //|> filter(fn: (r) => r._value != "")
        |> last()
    """
    res = [
        record_it.values
        for table_it in client.query_api().query(q)
        for record_it in table_it.records
    ]
    print(res)
```
### Expected behavior
[{'result': '_result', 'table': 0, '_time': datetime.datetime(2023, 3, 2, 18, 47, 7, 547036, tzinfo=tzutc()), '_value': ''}]
### Actual behavior
[{'result': '_result', 'table': 0, '_time': datetime.datetime(2023, 3, 2, 18, 47, 7, 547036, tzinfo=tzutc()), '_value': None}]
### Additional info
[`last()` function](https://docs.influxdata.com/flux/v0.x/stdlib/universe/last/):
> If \[`_value`\] column is null in the last record, last() returns the previous record with a non-null value.
My last record with a non-null `_value` has the empty string for `_value`.
So far, the row is correctly identified.
But by the time I read `_value` from the result returned in Python, the record does not hold the empty string `""` but `None`.
I verified that the value of `_value` is really the empty string in the database and not `None` by experimenting with filters (see commented code).
I find it more intuitive to leave the empty string as empty string in Python. | open | 2023-03-31T09:16:32Z | 2023-03-31T09:16:32Z | https://github.com/influxdata/influxdb-client-python/issues/570 | [
"bug"
] | hy144328 | 0 |
albumentations-team/albumentations | deep-learning | 2,100 | [Add transform] Add RandomMedianBlur | Add RandomMedianBlur that is an alias of MedianBlur and has the same API as Kornia's https://kornia.readthedocs.io/en/latest/augmentation.module.html#kornia.augmentation.RandomMedianBlur | closed | 2024-11-08T15:52:57Z | 2024-11-17T01:30:23Z | https://github.com/albumentations-team/albumentations/issues/2100 | [
"enhancement"
] | ternaus | 1 |
seleniumbase/SeleniumBase | web-scraping | 2,356 | Multithreading bot using SeleniumBase | Hello,
I'm implementing a multithreaded Python bot to scrape a Twitter account.
I implemented scraping using threads (threading.Thread). In my case I want to create about 20 threads that each scrape a specific Twitter account page.
The logic I implemented works but the CPU goes to 100% after a few threads (about 10).
I think I'm doing the thread management wrong. How can I optimize the algorithm? Should I use py-test?
```python
import threading

from seleniumbase import SB

def start_twitter(args):
    accounts = 20
    threads = []
    for index in range(accounts):
        # ... ...
        thread = MyThread()
        threads.append(thread)
        thread.start()
    for t in threads:
        try:
            t.join()
        except Exception as e:
            pass

class MyThread(threading.Thread):
    def run(self):
        with SB(uc=True, extension_dir="temp/old-twitter-layout-2023/", headless2=True) as sb:
            sb.driver.uc_open("https://www.twitter.com")
            # scraping
```
The ```start_twitter(args)``` method is called from an ```async def``` method.
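For reference, this is the direction I'm considering to keep the CPU under control: a bounded pool so only a few browsers are alive at the same time, instead of all 20 at once. It's only a sketch — the `MAX_WORKERS` value and the `scrape_account` helper are made-up names, not from my current code:
```python
from concurrent.futures import ThreadPoolExecutor, as_completed

from seleniumbase import SB

MAX_WORKERS = 4  # assumed cap; tune to how many browsers the machine can handle

def scrape_account(account):
    # One SB browser per task, same options as in the snippet above.
    with SB(uc=True, extension_dir="temp/old-twitter-layout-2023/", headless2=True) as sb:
        sb.driver.uc_open("https://www.twitter.com")
        # scraping for this account
        return account

def start_twitter(accounts):
    results = []
    # At most MAX_WORKERS browsers run at any time; the remaining accounts wait in the queue.
    with ThreadPoolExecutor(max_workers=MAX_WORKERS) as executor:
        futures = [executor.submit(scrape_account, acc) for acc in accounts]
        for future in as_completed(futures):
            try:
                results.append(future.result())
            except Exception:
                pass  # same as the original join() loop: ignore individual failures
    return results
```
Would something like this be the recommended way to manage the threads?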
Thanks in advance | closed | 2023-12-10T22:47:18Z | 2023-12-11T00:58:35Z | https://github.com/seleniumbase/SeleniumBase/issues/2356 | [
"duplicate",
"UC Mode / CDP Mode"
] | fashionprivate | 1 |
chainer/chainer | numpy | 8,336 | Release Tasks for v7.0.0 | Chainer
- [x] NHWC Support (#7620)
- [x] Python 3.8
- [ ] Force static linking to ChainerX #8426
CuPy
- [x] https://github.com/cupy/cupy/pull/2632
- [x] Python 3.8 https://github.com/cupy/cupy/issues/2599 | closed | 2019-10-29T07:27:03Z | 2019-12-06T04:46:54Z | https://github.com/chainer/chainer/issues/8336 | [
"issue-checked"
] | beam2d | 0 |
fbdesignpro/sweetviz | pandas | 79 | Tqdm errors while running sweetviz.compare() or analyze() in Colab | Feature: Item_Outlet_Sales (TARGET)
[ 18%] 00:00 -> (00:03 left)
TqdmDeprecationWarning: Please use `tqdm.gui.tqdm(...)` instead of `tqdm(..., gui=True)`
---------------------------------------------------------------------------
TqdmDeprecationWarning Traceback (most recent call last)
<ipython-input-38-2613e0da05df> in <module>()
----> 1 comparison_report = sv.compare([train,'Train'], [test,'Test'], target_feat='Item_Outlet_Sales',feat_cfg= feature_config)
2 comparison_report.show_notebook()
3 frames
/usr/local/lib/python3.7/dist-packages/sweetviz/sv_public.py in compare(source, compare, target_feat, feat_cfg, pairwise_analysis)
21 pairwise_analysis: str = 'auto'):
22 report = sweetviz.DataframeReport(source, target_feat, compare,
---> 23 pairwise_analysis, feat_cfg)
24 return report
25
/usr/local/lib/python3.7/dist-packages/sweetviz/dataframe_report.py in __init__(self, source, target_feature_name, compare, pairwise_analysis, fc)
199 filtered_series_names_in_source.remove(targets_found[0])
200 target_type = self._target["type"]
--> 201 self.progress_bar.update(1)
202
203 # Set final target series and sanitize targets (e.g. bool->truly bool)
/usr/local/lib/python3.7/dist-packages/tqdm/notebook.py in update(self, *args, **kwargs)
263 def update(self, *args, **kwargs):
264 try:
--> 265 return super(tqdm_notebook, self).update(*args, **kwargs)
266 # NB: except ... [ as ...] breaks IPython async KeyboardInterrupt
267 except: # NOQA
/usr/local/lib/python3.7/dist-packages/tqdm/std.py in update(self, n)
1203 Increment to add to the internal counter of iterations
1204 [default: 1]. If using float, consider specifying `{n:.3f}`
-> 1205 or similar in `bar_format`, or specifying `unit_scale`.
1206
1207 Returns
TqdmDeprecationWarning: Please use `tqdm.gui.tqdm(...)` instead of `tqdm(..., gui=True)` | closed | 2021-02-24T07:08:09Z | 2021-02-24T12:56:55Z | https://github.com/fbdesignpro/sweetviz/issues/79 | [] | AdityaKj | 3 |
AUTOMATIC1111/stable-diffusion-webui | deep-learning | 15,540 | [Bug]: Disappeared "DPM++ 2M Karras" | ### Checklist
- [ ] The issue exists after disabling all extensions
- [ ] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [ ] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
(Windows 11 Pro 64-bit) After updating Automatic1111 to 1.9.0 (git pull), "DPM++ 2M Karras" is missing from the list of Sampling methods.
### Steps to reproduce the problem
Update to 1.9.0
### What should have happened?
DPM++ 2M Karras must exist?
### What browsers do you use to access the UI ?
_No response_
### Sysinfo
Standard
### Console logs
```Shell
No special
```
### Additional information
_No response_ | closed | 2024-04-16T17:10:23Z | 2024-04-24T17:45:52Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/15540 | [
"bug-report"
] | milen-prg | 3 |
huggingface/transformers | nlp | 36,461 | model.generate() produces different outputs with padding for flan-t5-small | ### System Info
- `transformers` version: 4.49.0
- Platform: Linux-5.15.0-133-generic-x86_64-with-glibc2.39
- Python version: 3.12.3
- Huggingface_hub version: 0.28.1
- Safetensors version: 0.5.2
- Accelerate version: 1.4.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (GPU?): 2.6.0a0+ecf3bae40a.nv25.01 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: no
- GPU type: NVIDIA A30
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small")
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-small")
input_texts = [
    "Translate to French: Hello, how are you?",
    "Summarize: The quick brown fox jumps over the lazy dog."
]
inputs = tokenizer(input_texts, return_tensors="pt", padding=True, truncation=True)
outputs = model.generate(**inputs, max_new_tokens=50)
for i, output in enumerate(outputs):
    decoded_output = tokenizer.decode(output, skip_special_tokens=True)
    print(f"Input {i + 1}: {input_texts[i]}")
    print(f"Output {i + 1}: {decoded_output}\n")
```
Output with transformers 4.49.0:
```
Input 1: Translate to French: Hello, how are you?
Output 1:
Input 2: Summarize: The quick brown fox jumps over the lazy dog.
Output 2: The brown fox jumps over the dog.
```
I found references to similar issues (https://github.com/huggingface/transformers/issues/28385), but this one seems to be a regression between 4.48.3 and 4.49.0. I tried combinations of right and left padding with `use_cache`, but no luck.
Interestingly, `t5-small` model does not show this regression.
### Expected behavior
Output with transformers 4.48.3:
```
Input 1: Translate to French: Hello, how are you?
Output 1: Hello, c'est-ce que vous êtes?
Input 2: Summarize: The quick brown fox jumps over the lazy dog.
Output 2: The brown fox jumps over the dog.
``` | closed | 2025-02-27T18:44:57Z | 2025-02-28T17:24:25Z | https://github.com/huggingface/transformers/issues/36461 | [
"bug"
] | achartier | 2 |
google-research/bert | tensorflow | 419 | Can one expand the vocabulary for fine-tuning by replacing foreign unicode characters? | I am fine-tuning the BERT model but need to add a few thousand words. I know that one can replace the ~1000 [unused#] lines at the top of the vocab.txt, but I also notice there are thousands of single foreign characters (unicode) in the file, which I will never use. For fine-tuning, is it possible to replace those with my words, fine tune, and have model still work correctly? | open | 2019-02-06T16:51:57Z | 2021-09-17T09:14:04Z | https://github.com/google-research/bert/issues/419 | [] | bsugerman | 6 |
serengil/deepface | deep-learning | 585 | Error in finding distance between faces in real-time | I've created the embeddings and now running the script in real-time.
Once the process reaches the part where it must find distance for face recognition this is the error i get:
distance = dst.findCosineDistance(img1_representation, img2_representation)
File "/Users/---/miniforge3/envs/faceitstf/lib/python3.8/site-packages/deepface/commons/distance.py", line 4, in findCosineDistance
a = np.matmul(np.transpose(source_representation), test_representation)
ValueError: matmul: Input operand 1 does not have enough dimensions (has 0, gufunc core with signature (n?,k),(k,m?)->(n?,m?) requires 1) | closed | 2022-10-30T15:21:58Z | 2022-11-01T07:35:37Z | https://github.com/serengil/deepface/issues/585 | [
"question"
] | MKJ52 | 7 |
ageitgey/face_recognition | machine-learning | 1,372 | Cant seem to install face_rec | * face_recognition version: None
* Python version: 3.9
* Operating System: Windows 10
### Description
I can't install this module because apparently I don't have CMake. But I do have it installed, so please help!
### What I Did
```
> pip install face-recognition
Collecting face-recognition
Using cached face_recognition-1.3.0-py2.py3-none-any.whl (15 kB)
Requirement already satisfied: numpy in c:\users\sitan\appdata\local\programs\python\python39\lib\site-packages (from face-recognition) (1.21.1)
Collecting dlib>=19.7
Using cached dlib-19.22.1.tar.gz (7.4 MB)
Requirement already satisfied: Click>=6.0 in c:\users\sitan\appdata\local\programs\python\python39\lib\site-packages (from face-recognition) (8.0.1)
Requirement already satisfied: face-recognition-models>=0.3.0 in c:\users\sitan\appdata\local\programs\python\python39\lib\site-packages (from face-recognition) (0.3.0)
Requirement already satisfied: Pillow in c:\users\sitan\appdata\local\programs\python\python39\lib\site-packages (from face-recognition) (8.3.1)
Requirement already satisfied: colorama in c:\users\sitan\appdata\local\programs\python\python39\lib\site-packages (from Click>=6.0->face-recognition) (0.4.4)
Building wheels for collected packages: dlib
Building wheel for dlib (setup.py) ... error
ERROR: Command errored out with exit status 1:
command: 'c:\users\sitan\appdata\local\programs\python\python39\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\sitan\\AppData\\Local\\Temp\\pip-install-346ro28p\\dlib_aff7113abe1c4754af866dff9fba17ff\\setup.py'"'"'; __file__='"'"'C:\\Users\\sitan\\AppData\\Local\\Temp\\pip-install-346ro28p\\dlib_aff7113abe1c4754af866dff9fba17ff\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d 'C:\Users\sitan\AppData\Local\Temp\pip-wheel-blmf3zli'
cwd: C:\Users\sitan\AppData\Local\Temp\pip-install-346ro28p\dlib_aff7113abe1c4754af866dff9fba17ff\
Complete output (8 lines):
running bdist_wheel
running build
running build_py
package init file 'tools\python\dlib\__init__.py' not found (or not a regular file)
running build_ext
ERROR: CMake must be installed to build dlib
----------------------------------------
ERROR: Failed building wheel for dlib
Running setup.py clean for dlib
Failed to build dlib
Installing collected packages: dlib, face-recognition
Running setup.py install for dlib ... error
ERROR: Command errored out with exit status 1:
command: 'c:\users\sitan\appdata\local\programs\python\python39\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\sitan\\AppData\\Local\\Temp\\pip-install-346ro28p\\dlib_aff7113abe1c4754af866dff9fba17ff\\setup.py'"'"'; __file__='"'"'C:\\Users\\sitan\\AppData\\Local\\Temp\\pip-install-346ro28p\\dlib_aff7113abe1c4754af866dff9fba17ff\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record
'C:\Users\sitan\AppData\Local\Temp\pip-record-4xwwh_v6\install-record.txt' --single-version-externally-managed --compile --install-headers 'c:\users\sitan\appdata\local\programs\python\python39\Include\dlib'
cwd: C:\Users\sitan\AppData\Local\Temp\pip-install-346ro28p\dlib_aff7113abe1c4754af866dff9fba17ff\
Complete output (8 lines):
running install
running build
running build_py
package init file 'tools\python\dlib\__init__.py' not found (or not a regular file)
running build_ext
ERROR: CMake must be installed to build dlib
----------------------------------------
ERROR: Command errored out with exit status 1: 'c:\users\sitan\appdata\local\programs\python\python39\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\sitan\\AppData\\Local\\Temp\\pip-install-346ro28p\\dlib_aff7113abe1c4754af866dff9fba17ff\\setup.py'"'"'; __file__='"'"'C:\\Users\\sitan\\AppData\\Local\\Temp\\pip-install-346ro28p\\dlib_aff7113abe1c4754af866dff9fba17ff\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from
setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\sitan\AppData\Local\Temp\pip-record-4xwwh_v6\install-record.txt' --single-version-externally-managed --compile --install-headers 'c:\users\sitan\appdata\local\programs\python\python39\Include\dlib' Check the logs for full command output.
```
| closed | 2021-09-20T02:53:18Z | 2021-09-25T08:09:17Z | https://github.com/ageitgey/face_recognition/issues/1372 | [] | ItsSitanshu | 4 |
sloria/TextBlob | nlp | 13 | Train classifiers from CSV and JSON files | It would be nice to train classifiers directly from files in CSV, TSV, and JSON format.
``` python
from text.classifiers import NaiveBayesClassifier
cl = NaiveBayesClassifier("sent_train.csv", format="csv")
```
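For comparison, the current workaround is to parse the file yourself and pass a list of `(text, label)` tuples; this sketch assumes a two-column CSV with no header row:
``` python
import csv
from text.classifiers import NaiveBayesClassifier

with open("sent_train.csv") as fp:
    train = [(text, label) for text, label in csv.reader(fp)]

cl = NaiveBayesClassifier(train)
```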
| closed | 2013-08-26T02:37:38Z | 2013-09-01T22:55:05Z | https://github.com/sloria/TextBlob/issues/13 | [
"enhancement"
] | sloria | 1 |
Gozargah/Marzban | api | 866 | Bug title | **Describe the bug**
I used the script file that was provided for installation. The page stays white and the dashboard never fully loads; it just stays pending.

**Machine details (please complete the following information):**
- OS: ubuntu 18.04
| closed | 2024-03-12T10:41:41Z | 2024-03-20T22:54:01Z | https://github.com/Gozargah/Marzban/issues/866 | [
"Invalid"
] | rastinrastini | 2 |
FactoryBoy/factory_boy | sqlalchemy | 305 | Option to make Faker return unique values | I see random test failures because e.g. `factory.Faker('company')` returns duplicate values (usually after a few hundred calls, but as low as the second call). To remedy this, I wrote a subclass of `Faker` that keeps track of values returned so it can ensure uniqueness. The code is fairly trivial:
```
from factory.faker import Faker
class UniqueFaker(Faker):
    """
    A Faker that keeps track of returned values so it can ensure uniqueness.
    """
    def __init__(self, *args, **kwargs):
        super(UniqueFaker, self).__init__(*args, **kwargs)
        self._values = {None}

    def generate(self, extra_kwargs):
        value = None
        while value in self._values:
            value = super(UniqueFaker, self).generate(extra_kwargs)
        self._values.add(value)
        return value
```
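Usage is the same as the built-in declaration, e.g. (with a stand-in `Company` model used only for illustration):
```
import factory

class Company:  # stand-in model, only for the example
    def __init__(self, name):
        self.name = name

class CompanyFactory(factory.Factory):
    class Meta:
        model = Company

    # declared exactly like factory.Faker('company'), but values never repeat
    name = UniqueFaker('company')
```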
Is there any interest in either adding this subclass to factoryboy, or integrating the functionality into `Faker` itself?
| closed | 2016-05-24T08:51:10Z | 2024-05-16T16:41:43Z | https://github.com/FactoryBoy/factory_boy/issues/305 | [] | beniwohli | 12 |
xlwings/xlwings | automation | 2,082 | Add-in functions appear as links | #### OS: Windows 10
#### Versions of xlwings, Excel and Python: 0.26.2, Office 365, Python 3.8.10
The add-in `demo.xlam` was developed and deployed to the `C:\Users\<username>\AppData\Roaming\Microsoft\Excel\XLSTART` directory. It works well in a new Excel sheet and the deployed UDFs can be used, but only the first time.
The issue is: when the Excel file is re-opened, all the functions appear as links like `'C:\Users\<username>\AppData\Roaming\Microsoft\Excel\XLSTART\demo.xlam'!MySum(A2)`, together with a pop-up dialog saying "this workbook contains links to one or more external sources that could be unsafe...".
What I observed: after the working Excel file is closed, the Excel process can still be found in Task Manager. If I kill it and then reopen the Excel file, the UDFs work as expected again; otherwise they all turn into the links described above.
But how can I make it work without killing and restarting every time? Am I missing anything?
Thanks.
| closed | 2022-11-01T14:56:59Z | 2022-11-02T14:06:49Z | https://github.com/xlwings/xlwings/issues/2082 | [] | zhangt58 | 2 |
StratoDem/sd-material-ui | dash | 40 | Port AutoComplete to Dash component | http://www.material-ui.com/#/components/auto-complete | closed | 2018-01-25T21:30:58Z | 2018-04-21T12:14:19Z | https://github.com/StratoDem/sd-material-ui/issues/40 | [
"Tech: JS",
"Priority: Low",
"Tech: Single Component"
] | mjclawar | 0 |