repo_name stringlengths 9 75 | topic stringclasses 30
values | issue_number int64 1 203k | title stringlengths 1 976 | body stringlengths 0 254k | state stringclasses 2
values | created_at stringlengths 20 20 | updated_at stringlengths 20 20 | url stringlengths 38 105 | labels listlengths 0 9 | user_login stringlengths 1 39 | comments_count int64 0 452 |
|---|---|---|---|---|---|---|---|---|---|---|---|
DistrictDataLabs/yellowbrick | scikit-learn | 943 | KElbow raises confusing `ValueError` when optimal k is outside provided k-range | **Describe the bug**
We seem to have a bug in our updated `KElbow` visualizer following the updates in #813 and #935. When the `locate_elbow` param is set to `True` (which it is by default), we get a `ValueError` when calling `fit` on the visualizer.
**To Reproduce**
```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from yellowbrick.cluster import KElbowVisualizer
X, y = make_blobs(centers=12, n_samples=1000, n_features=16, shuffle=True)
viz = KElbowVisualizer(KMeans(), k=(4,12))
viz.fit(X)
```
```bash
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/rbilbro/pyjects/my_yb/yellowbrick/cluster/elbow.py", line 334, in fit
self.k_values_, self.k_scores_, **locator_kwargs
File "/Users/rbilbro/pyjects/my_yb/yellowbrick/utils/kneed.py", line 126, in __init__
self.knee, self.norm_knee = min(self.all_knees), min(self.all_norm_knees)
ValueError: min() arg is an empty sequence
```
**Dataset**
`make_blobs`
**Expected behavior**
`fit` should fit the `KElbow` visualizer without raising an error.
**Desktop (please complete the following information):**
- OS: macOS
- Python Version 3.7
- Yellowbrick Version v1.0 dev
| closed | 2019-08-08T16:02:16Z | 2019-08-15T19:10:55Z | https://github.com/DistrictDataLabs/yellowbrick/issues/943 | [] | rebeccabilbro | 1 |
pywinauto/pywinauto | automation | 547 | uia APP.kill() bug | I'm not sure about this, but my observation is that `APP.kill()` depending on the state of the program throws an error or "halfly terminates" probably because of trying to press ESC on a non existing control/window.
Example 1 : windows magnifier. This only happens, when you click on the dropdown menu AND the window itself was focused.
Example 2 : OBS. This only happens, when the Stats dialog is open AND the OBS window itself was focused.
Example 3 : windows magnifier half termination -> simple example code is `APP = pywinauto.Application(backend="uia").start("magnify.exe")` and `APP.kill()`. If you have docked magnifier, the top of the screen remains "docked". Also I have high confidence in that the application still lingers in the background/memory (huge performance drop while writing this issue), have no idea if this is directly related to pywinatuo `APP.kill()`.
**Example 1 code (Make sure to rename "Nagyito" to Magnifier maybe, name of the window and "Nezetek" to Views maybe, the dropdown with 3 values):**
```
import pywinauto
from pywinauto import actionlogger
actionlogger.enable()
APP = pywinauto.Application(backend="uia").start("magnify.exe")
APP.Nagyito.set_focus() # name of the window maybe Magnifier
APP.Nagyito.Nezetek.click_input() # name of the dropdown maybe Views
APP.kill()
```
**Result of above code:**
```
2018-08-15 11:48:47,341 INFO: Started magnify.exe application.
2018-08-15 11:48:48,710 INFO: Clicked Button "Nézetek" by left button mouse click at (267, 354)
2018-08-15 11:48:49,165 INFO: Typed text to the Menu: {ESC}
Traceback (most recent call last):
File "C:\Users\oliver.horvath\venv\pywinautogithub\lib\site-packages\pywinauto\uia_defines.py", line 232, in get_elem_interface
iface = cur_ptrn.QueryInterface(cls_name)
File "C:\Users\oliver.horvath\venv\pywinautogithub\lib\site-packages\comtypes\__init__.py", line 1158, in QueryInterface
self.__com_QueryInterface(byref(iid), byref(p))
ValueError: NULL COM pointer access
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\oliver.horvath\venv\pywinautogithub\lib\site-packages\pywinauto\controls\uiawrapper.py", line 434, in close
iface = self.iface_window
File "C:\Users\oliver.horvath\venv\pywinautogithub\lib\site-packages\pywinauto\controls\uiawrapper.py", line 132, in __get__
value = self.fget(obj)
File "C:\Users\oliver.horvath\venv\pywinautogithub\lib\site-packages\pywinauto\controls\uiawrapper.py", line 322, in iface_window
return uia_defs.get_elem_interface(elem, "Window")
File "C:\Users\oliver.horvath\venv\pywinautogithub\lib\site-packages\pywinauto\uia_defines.py", line 234, in get_elem_interface
raise NoPatternInterfaceError()
pywinauto.uia_defines.NoPatternInterfaceError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:/Users/oliver.horvath/PycharmProjects/pywinautogithub/main4.py", line 10, in <module>
APP.kill()
File "C:\Users\oliver.horvath\venv\pywinautogithub\lib\site-packages\pywinauto\application.py", line 1247, in kill
win.close()
File "C:\Users\oliver.horvath\venv\pywinautogithub\lib\site-packages\pywinauto\controls\uiawrapper.py", line 440, in close
self.type_keys("{ESC}")
File "C:\Users\oliver.horvath\venv\pywinautogithub\lib\site-packages\pywinauto\base_wrapper.py", line 882, in type_keys
self.verify_actionable()
File "C:\Users\oliver.horvath\venv\pywinautogithub\lib\site-packages\pywinauto\base_wrapper.py", line 626, in verify_actionable
self.verify_visible()
File "C:\Users\oliver.horvath\venv\pywinautogithub\lib\site-packages\pywinauto\base_wrapper.py", line 649, in verify_visible
raise ElementNotVisible()
pywinauto.base_wrapper.ElementNotVisible
```
**Example 2 is with OBS:**
1. Enable the option "Open Stats dialog on startup" in General, 2nd option maybe.
```
import pywinauto
from pywinauto import actionlogger
actionlogger.enable()
APP = pywinauto.Application(backend="uia").start(work_dir=r"C:\Program Files (x86)\obs-studio\bin\64bit", cmd_line=r"C:\Program Files (x86)\obs-studio\bin\64bit\obs64.exe")
APP.Dialog.set_focus() # OBS window
APP.kill()
```
**Result of above code:**
```
Traceback (most recent call last):
File "C:/Users/oliver.horvath/PycharmProjects/pywinautogithub/main4.py", line 11, in <module>
APP.kill()
File "C:\Users\oliver.horvath\venv\pywinautogithub\lib\site-packages\pywinauto\application.py", line 1247, in kill
win.close()
File "C:\Users\oliver.horvath\venv\pywinautogithub\lib\site-packages\pywinauto\controls\uiawrapper.py", line 434, in close
iface = self.iface_window
File "C:\Users\oliver.horvath\venv\pywinautogithub\lib\site-packages\pywinauto\controls\uiawrapper.py", line 132, in __get__
value = self.fget(obj)
File "C:\Users\oliver.horvath\venv\pywinautogithub\lib\site-packages\pywinauto\controls\uiawrapper.py", line 322, in iface_window
return uia_defs.get_elem_interface(elem, "Window")
File "C:\Users\oliver.horvath\venv\pywinautogithub\lib\site-packages\pywinauto\uia_defines.py", line 231, in get_elem_interface
cur_ptrn = element_info.GetCurrentPattern(ptrn_id)
_ctypes.COMError: (-2147220991, 'Az esemény nem tudott meghívni előfizetőt', (None, None, None, 0, None))
```
Translation of last line: "The event was unable to invoke a subscriber". | closed | 2018-08-15T10:09:16Z | 2022-05-12T16:44:38Z | https://github.com/pywinauto/pywinauto/issues/547 | [
"bug",
"Priority-Critical"
] | meshuggahtas | 12 |
xlwings/xlwings | automation | 1,622 | add app.cut_copy_mode | should be used in reports package | closed | 2021-06-18T09:41:00Z | 2021-06-24T12:45:18Z | https://github.com/xlwings/xlwings/issues/1622 | [
"enhancement"
] | fzumstein | 0 |
ultralytics/yolov5 | deep-learning | 12,690 | Yolov5 object detection and classification in a single script | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
How can I merge yolov5 object detection and classification in a single script? My task is to first detect the roi of the object using object detection and the use the roi as input to classify the object.How can i do this?
### Additional
_No response_ | closed | 2024-01-30T13:53:58Z | 2024-10-20T19:38:37Z | https://github.com/ultralytics/yolov5/issues/12690 | [
"question",
"Stale"
] | humairaneha | 3 |
pbugnion/gmaps | jupyter | 342 | Error displaying widget: model not found | Hi,
after updating jupyter & co. I'm not able anymore to show the map, I got instead:
**Error displaying widget: model not found**
In the console there is this error:
**Unhandled Promise Rejection: Error: Module jupyter-gmaps, semver range 0.9.0 is not registered as a widget module**
Is there something that I can do to fix it?
Thank you
```
Package Version
------------------- -----------
appdirs 1.4.4
appnope 0.1.0
appscript 1.1.0
arcgis 1.8.1
asn1crypto 1.3.0
astroid 2.4.2
attrs 19.3.0
Babel 2.8.0
backcall 0.2.0
beautifulsoup4 4.9.1
black 19.10b0
bleach 3.1.5
bottle 0.12.18
bqplot 0.12.13
branca 0.4.1
bs4 0.0.1
certifi 2020.6.20
cffi 1.14.0
chardet 3.0.4
click 7.1.2
colorama 0.4.3
crc16 0.1.1
cryptography 2.9.2
cycler 0.10.0
dataclasses 0.7
decorator 4.4.2
defusedxml 0.6.0
distlib 0.3.0
docutils 0.16
ebaysdk 2.2.0
entrypoints 0.3
fastapi 0.58.0
feedparser 5.2.1
filelock 3.0.12
flatlib 0.2.1
future 0.18.2
geographiclib 1.50
geojson 2.5.0
geopy 1.22.0
gevent 20.6.2
Glances 3.1.4.1
gmaps 0.9.0
googlemaps 4.4.1
greenlet 0.4.16
gunicorn 20.0.4
html5lib 1.1
idna 2.9
importlib-metadata 1.6.1
importlib-resources 2.0.1
inflection 0.5.0
ipykernel 5.3.0
ipyleaflet 0.13.0
ipysheet 0.4.4
ipython 7.15.0
ipython-genutils 0.2.0
ipywidgets 7.5.1
isort 4.3.21
jedi 0.17.1
Jinja2 2.11.2
json5 0.9.5
jsonpickle 1.4.1
jsonschema 3.2.0
jupyter 1.0.0
jupyter-client 6.1.3
jupyter-console 6.1.0
jupyter-core 4.6.3
jupyterlab 2.1.5
jupyterlab-launcher 0.13.1
jupyterlab-server 1.1.5
keyring 21.2.1
kiwisolver 1.2.0
lazy-object-proxy 1.5.0
lerc 0.1.0
lxml 4.5.1
MarkupSafe 1.1.1
matplotlib 3.2.2
mccabe 0.6.1
mistune 0.8.4
more-itertools 8.4.0
mpmath 1.1.0
nbconvert 5.6.1
nbformat 5.0.7
ndg-httpsclient 0.5.1
notebook 6.0.3
ntlm-auth 1.5.0
numpy 1.19.0
oauthlib 3.1.0
ortools 7.7.7810
packaging 20.4
pandas 1.0.5
pandocfilters 1.4.2
parso 0.7.0
pathspec 0.8.0
pbr 5.4.5
pexpect 4.8.0
pickleshare 0.7.5
pip 20.1.1
prometheus-client 0.8.0
prompt-toolkit 3.0.5
protobuf 3.12.2
psutil 5.7.0
psycopg2-binary 2.8.5
ptyprocess 0.6.0
pyasn1 0.4.8
pycparser 2.20
pydal 20200531.3
pydantic 1.5.1
Pygments 2.6.1
PyJWT 1.7.1
pylint 2.5.3
pymongo 3.10.1
pyOpenSSL 19.1.0
pyparsing 2.4.7
pyrsistent 0.16.0
pyshp 2.1.0
PySide2 5.15.0
pyswisseph 2.8.0.post1
python-dateutil 2.8.1
python-digitalocean 1.15.0
pytz 2020.1
PyYAML 5.3.1
pyzmq 19.0.1
qgrid 1.3.1
qtconsole 4.7.5
QtPy 1.9.0
Quandl 3.5.0
redis 3.5.3
regex 2020.6.8
reloader 0.6
requests 2.24.0
requests-ntlm 1.1.0
requests-oauthlib 1.3.0
requests-toolbelt 0.9.1
scipy 1.5.0
seaborn 0.10.1
Send2Trash 1.5.0
setuptools 47.3.1
shiboken2 5.15.0
simplegeneric 0.8.1
six 1.15.0
slack-cleaner 0.7.4
slacker 0.14.0
soupsieve 2.0.1
starlette 0.13.4
stevedore 2.0.1
sympy 1.6
terminado 0.8.3
testpath 0.4.4
toml 0.10.1
tornado 6.0.4
traitlets 4.3.3
traittypes 0.2.1
typed-ast 1.4.1
urllib3 1.25.9
version-information 1.0.3
virtualenv 20.0.25
virtualenv-clone 0.5.4
virtualenvwrapper 4.8.4
walrus 0.8.1
wcwidth 0.2.5
webencodings 0.5.1
wheel 0.34.2
widgetsnbextension 3.5.1
wrapt 1.12.1
xarray 0.15.1
xlrd 1.2.0
XlsxWriter 1.2.9
xlwings 0.19.4
xlwt 1.3.0
yatl 20200430.1
zipp 3.1.0
zope.event 4.4
zope.interface 5.1.0
``` | open | 2020-06-26T10:59:39Z | 2022-07-02T23:14:21Z | https://github.com/pbugnion/gmaps/issues/342 | [] | mbelletti | 9 |
Zeyi-Lin/HivisionIDPhotos | fastapi | 185 | idphoto接口有几个参数没有使用Form() | 在学习大佬的代码时发现 `/idphoto`接口的入参里有几个参数没有使用 `fastapi`的`Form()` 处理。这样有一个问题是在前端调用时 虽然通过`new FormData()`传递了 但是fastapi接收不到
```js
// 前端调用
const formdata = new FormData()
formdata.append('head_measure_ratio', '0.6')
formdata.append('head_height_ratio', '0.6')
formdata.append('top_distance_max', '0.6')
formdata.append('top_distance_min', '0.6')
fetch('/idphoto', formdata)
```
```python
# py服务
@app.post("/idphoto")
async def idphoto_inference(
input_image: UploadFile = File(None),
input_image_base64: str = Form(None),
height: int = Form(413),
width: int = Form(295),
human_matting_model: str = Form("modnet_photographic_portrait_matting"),
face_detect_model: str = Form("mtcnn"),
hd: bool = Form(True),
dpi: int = Form(300),
face_align: bool = Form(False),
head_measure_ratio: float = 0.2,
head_height_ratio: float = 0.45,
top_distance_max: float = 0.12,
top_distance_min: float = 0.10,
):
```
经过尝试最后四个参数只能通过query传递,请问大佬这四个参数就是设计成query传递吗?
```js
fetch('/idphoto?head_measure_ratio=0.6', formdata)
```
在url上携带query的方式是可以传递到py接口上的 | closed | 2024-10-08T03:20:27Z | 2024-10-13T02:27:15Z | https://github.com/Zeyi-Lin/HivisionIDPhotos/issues/185 | [] | shen774411223d | 1 |
explosion/spaCy | machine-learning | 12,487 | missing documentation for util.decaying | <!-- Describe the problem or suggestion here. If you've found a mistake and you know the answer, feel free to submit a pull request straight away: https://github.com/explosion/spaCy/pulls -->
I am able to import util.decaying:
from spacy.util import decaying
and even run it:
dropout_rates = decaying(0.2, 0.005)
But I only found out about that I need two not three parameters, as stated here https://www.tutorialspoint.com/spacy/spacy_util_decaying.htm, through trial-and-error and by getting error messages when executing.
I was not able to find any documentation for this method in your SpaCy documentation.
## Which page or section is this issue related to?
<!-- Please include the URL and/or source. --> https://spacy.io/api/top-level#util
| closed | 2023-03-29T20:42:11Z | 2023-05-08T00:02:20Z | https://github.com/explosion/spaCy/issues/12487 | [
"docs"
] | mevol | 6 |
dsdanielpark/Bard-API | nlp | 272 | Multiple-Cookies has been changed. | Hello!
it seem google have changed this again. now the value dont end with an . anymore.
i have looked at older issues, i did try to use version 0.1.39 but no go. the as it seem to expect the value to end with an . as i am getting this error with it
```bash
__Secure-1PSID value should end with a single dot. Enter correct __Secure-1PSID value.
```
My region EU / scandinavia
I also did have an open old session in another browser that still used the older value that ended with dot, but that stoped working today.
is there any fix for this ?
New value looks like this:
```bash
g.a000xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx0076
```
also when i looked at the Multi-cookie Bard example, one of the values does not exist anymore "__Secure-1PSIDTS".
These are the new values:

| closed | 2024-02-01T19:27:05Z | 2024-03-05T08:25:09Z | https://github.com/dsdanielpark/Bard-API/issues/272 | [] | mathisen99 | 15 |
HumanSignal/labelImg | deep-learning | 504 | Bug: Fit Width doesn't stays active | If I am using the option "Fit Width" it works. But if I change to the next picture the zoom is resetted. I have to deactivate the "Fit Width" option and activate it again that it works. Because this is no good practice when annotating 5000 pictures once this was no option for me. I was annotating with the resetted zoom factor all pictures.
- **OS:** Windows 10 x64 1903
- **PyQt version:** 5.12.3 | closed | 2019-09-15T11:55:21Z | 2022-06-15T22:03:33Z | https://github.com/HumanSignal/labelImg/issues/504 | [] | butcher211 | 1 |
tiangolo/uvicorn-gunicorn-fastapi-docker | fastapi | 242 | getting timeout 504 in 1 minute | only happens on dev, when I process a csv then does not happen in dev
what could be the reason for this? | closed | 2023-05-11T00:16:26Z | 2024-08-25T04:01:51Z | https://github.com/tiangolo/uvicorn-gunicorn-fastapi-docker/issues/242 | [] | ravi160822 | 0 |
pydata/pandas-datareader | pandas | 763 | ModuleNotFoundError: No module named 'pandas_datareader' | I know this issue has been solved in so many posts, but none of them has fixed mine. I do not understand why but when I try to call pandas_datareader is always running this error. I'm coding in Sublime Text 3 in a Mac OS. My python version is 3.8, my panda version is 0.25.0 and the pandas_datareader's version is 0.8.1.
It runs panda and all the libraries I have installed except for this one. If you need the code I can post it also.
I think maybe the problem is installing the pandas_datareader because when I install it, it doesn't really install it, it just collects it from don't know where:
<img width="1440" alt="Captura de pantalla 2020-03-26 a las 18 09 31" src="https://user-images.githubusercontent.com/61276146/77675571-67287d00-6f8d-11ea-9383-6b71c1bbc511.png">
Thank you very much | closed | 2020-03-26T17:05:02Z | 2020-07-06T23:42:38Z | https://github.com/pydata/pandas-datareader/issues/763 | [] | Moratiya | 0 |
streamlit/streamlit | data-visualization | 10,257 | Chart builder | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [x] I added a descriptive title and summary to this issue.
### Summary
It would be nice if Chart Builder feature can be bumped up in the priority list.
### Why?
This feature could be a game changer for data analyst and scientists who are quite new to programming.
### How?
_No response_
### Additional Context
_No response_ | open | 2025-01-27T02:37:34Z | 2025-01-27T11:48:05Z | https://github.com/streamlit/streamlit/issues/10257 | [
"type:enhancement",
"feature:charts"
] | dmslowmo | 1 |
deepset-ai/haystack | nlp | 8,863 | Explain deprecation of `dataframe` field in documentation | We should explain here that dataframe is deprecated. https://docs.haystack.deepset.ai/docs/data-classes#document
We will need to fully remove the dataframe field from the explanation after the 2.11 release.
| closed | 2025-02-14T15:40:53Z | 2025-02-17T12:20:19Z | https://github.com/deepset-ai/haystack/issues/8863 | [
"type:documentation",
"P1"
] | julian-risch | 0 |
awesto/django-shop | django | 651 | Error if email field is not unique | The [docs](http://django-shop.readthedocs.io/en/latest/reference/customer-model.html?highlight=user_email#caveat-when-using-this-alternative-user-model) quote:
> The savvy reader may have noticed that in email_auth.models.User, the email field is not declared as unique. This by the way causes Django to complain during startup with:
However, I see that this is not a warning, it is an error:
```
$./manage.py makemigrations
SystemCheckError: System check identified some issues:
ERRORS:
email_auth.User: (auth.E003) 'User.email' must be unique because it is named as the 'USERNAME_FIELD'.
```
settings.py
```
AUTH_USER_MODEL = 'email_auth.User'
HOP_GUEST_IS_ACTIVE_USER = True
SILENCED_SYSTEM_CHECKS = ['auth.W004']
```
Shall I make it unique or did I miss a setting? | closed | 2017-09-12T13:50:06Z | 2017-09-14T13:44:23Z | https://github.com/awesto/django-shop/issues/651 | [] | raratiru | 14 |
home-assistant/core | python | 140,860 | Ollama uses the wrong model when using a dash and the model without dash exists | ### The problem
Hi, I am trying to use the phi4-mini model with Home Assistant.
I added by mistake the phi4 model, then I realized only the mini has support for tools, so I added that as well.
I subsequently removed the phi4, but the error is the same.
When I try to run a base query against the assistant, or directly in the ollama item, I get the generic error, and the log shows it's using `phi4` instead of `phi4-mini`:
```
Registratore: homeassistant.components.assist_pipeline.pipeline
Fonte: components/assist_pipeline/pipeline.py:1165
Integrazione: Assist pipeline (documentazione, problemi)
Prima occorrenza: 17 marzo 2025 alle ore 19:33:54 (16 occorrenze)
Ultimo accesso: 08:19:19
Unexpected error during intent recognition
Traceback (most recent call last):
File "/usr/src/homeassistant/homeassistant/components/assist_pipeline/pipeline.py", line 1165, in recognize_intent
conversation_result = await conversation.async_converse(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...<8 lines>...
)
^
File "/usr/src/homeassistant/homeassistant/components/conversation/agent_manager.py", line 117, in async_converse
result = await method(conversation_input)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/src/homeassistant/homeassistant/components/conversation/entity.py", line 47, in internal_async_process
return await self.async_process(user_input)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/src/homeassistant/homeassistant/components/ollama/conversation.py", line 219, in async_process
return await self._async_handle_message(user_input, chat_log)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/src/homeassistant/homeassistant/components/ollama/conversation.py", line 276, in _async_handle_message
[
...<4 lines>...
]
File "/usr/src/homeassistant/homeassistant/components/conversation/chat_log.py", line 277, in async_add_delta_content_stream
async for delta in stream:
^^^^^^^^^^^^^^^^^^^^^^^^^^
...<45 lines>...
self.delta_listener(self, delta) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/src/homeassistant/homeassistant/components/ollama/conversation.py", line 147, in _transform_stream
async for response in result:
...<18 lines>...
yield chunk
File "/usr/local/lib/python3.13/site-packages/ollama/_client.py", line 672, in inner
raise ResponseError(e.response.text, e.response.status_code) from None
ollama._types.ResponseError: registry.ollama.ai/library/phi4:latest does not support tools (status code: 400)
```
### What version of Home Assistant Core has the issue?
core-2025.3.3
### What was the last working version of Home Assistant Core?
_No response_
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
Ollama
### Link to integration documentation on our website
_No response_
### Diagnostics information
_No response_
### Example YAML snippet
```yaml
```
### Anything in the logs that might be useful for us?
```txt
```
### Additional information
_No response_ | open | 2025-03-18T08:35:07Z | 2025-03-18T08:35:07Z | https://github.com/home-assistant/core/issues/140860 | [] | canepan | 0 |
developmentseed/lonboard | data-visualization | 194 | Auto-downcast numeric attribute types in `from_geopandas` | Check for float, signed int, unsigned int data types, and call `pd.to_numeric(downcast=...)`.
It would be nice to check if this works with pyarrow-based data types as well.
This should be a kwarg, maybe named `auto_downcast: bool = True`? | closed | 2023-11-02T15:20:14Z | 2023-11-02T16:46:36Z | https://github.com/developmentseed/lonboard/issues/194 | [] | kylebarron | 0 |
liangliangyy/DjangoBlog | django | 423 | ubuntu nginx 部署后 502 Bad Gateway ,Gunicorn 能跑通,就是nginx出错,导致后面supervisor也出错,都是按照您的配置来的,希望您能指点一下 | <!--
如果你不认真勾选下面的内容,我可能会直接关闭你的 Issue。
提问之前,建议先阅读 https://github.com/ruby-china/How-To-Ask-Questions-The-Smart-Way
-->
**我确定我已经查看了** (标注`[ ]`为`[x]`)
- [x] [DjangoBlog的readme](https://github.com/liangliangyy/DjangoBlog/blob/master/README.md)
- [x] [配置说明](https://github.com/liangliangyy/DjangoBlog/blob/master/bin/config.md)
- [x] [其他 Issues](https://github.com/liangliangyy/DjangoBlog/issues)
----
**我要申请** (标注`[ ]`为`[x]`)
- [ ] BUG 反馈
- [ ] 添加新的特性或者功能
- [x] 请求技术支持
| closed | 2020-09-17T16:59:07Z | 2021-07-12T08:30:26Z | https://github.com/liangliangyy/DjangoBlog/issues/423 | [] | internet96 | 5 |
JoeanAmier/TikTokDownloader | api | 185 | 是否支持账号数据采集导出 csv,json 格式 | 获取视频数据后,不一定有下载需求。是否提供采集数据和下载可以单独选择或者组合选择 | open | 2024-03-22T07:24:32Z | 2024-04-16T02:28:52Z | https://github.com/JoeanAmier/TikTokDownloader/issues/185 | [] | gogelabs | 4 |
serengil/deepface | deep-learning | 510 | hi,bro, any module for train? or this code don't support to train or must diy | closed | 2022-07-14T09:24:16Z | 2022-07-14T12:14:14Z | https://github.com/serengil/deepface/issues/510 | [
"question"
] | wlz987 | 1 | |
coqui-ai/TTS | python | 3,833 | [Bug] Error with torch.isin() in Docker Container with transformers Library | ### Describe the bug
When running the application inside a Docker container, an error occurs related to the torch.isin() method within the transformers library. The error does not occur when running the application locally (outside of the container), suggesting a possible incompatibility or issue with the dependencies inside the Docker container.
### To Reproduce
Build the Docker image using the provided Dockerfile.
Dockerfile:
```dockerfile
FROM python:3.11.8-slim
ENV PYTHONUNBUFFERED=1
# Install system dependencies and Rust
RUN apt-get update && \
apt-get install -y --no-install-recommends \
build-essential \
curl \
libsndfile1 \
libgomp1 \
pkg-config \
libssl-dev && \
curl https://sh.rustup.rs -sSf | sh -s -- -y
ENV PATH="/root/.cargo/bin:${PATH}"
ENV COQUI_TOS_AGREED=1
# Update pip to the latest version
RUN pip install --upgrade pip
# Install Python dependencies
RUN pip install --no-cache-dir fastapi uvicorn torch==2.2.0 torchaudio==2.2.0 transformers==4.43.1 numpy==1.24.3 TTS==0.22.0 sudachipy cutlet
RUN pip install --upgrade transformers
# Copy the FastAPI application code
COPY main.py /app/main.py
WORKDIR /app
EXPOSE 8001
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8001"]
```
main.py:
```
import io
import os
import wave
import torch
import numpy as np
from fastapi import FastAPI, Request, Header, Body
from fastapi.responses import StreamingResponse
from TTS.tts.configs.xtts_config import XttsConfig
from TTS.tts.models.xtts import Xtts
from TTS.utils.generic_utils import get_user_data_dir
from TTS.utils.manage import ModelManager
# Set the number of threads and device
torch.set_num_threads(int(os.environ.get("NUM_THREADS", os.cpu_count())))
device = torch.device("cuda" if torch.cuda.is_available() and os.environ.get("USE_CPU", "0") == "0" else "cpu")
# Load custom model if available, otherwise download the default model
custom_model_path = os.environ.get("CUSTOM_MODEL_PATH", "/app/tts_models")
if os.path.exists(custom_model_path) and os.path.isfile(custom_model_path + "/config.json"):
model_path = custom_model_path
print("Loading custom model from", model_path, flush=True)
else:
print("Loading default model", flush=True)
model_name = "tts_models/multilingual/multi-dataset/xtts_v2"
print("Downloading XTTS Model:", model_name, flush=True)
ModelManager().download_model(model_name)
model_path = os.path.join(get_user_data_dir("tts"), model_name.replace("/", "--"))
print("XTTS Model downloaded", flush=True)
# Load model configuration and model
print("Loading XTTS", flush=True)
config = XttsConfig()
config.load_json(os.path.join(model_path, "config.json"))
model = Xtts.init_from_config(config)
model.load_checkpoint(config, checkpoint_dir=model_path, eval=True, use_deepspeed=True if device == "cuda" else False)
model.to(device)
print("XTTS Loaded.", flush=True)
# Initialize FastAPI
app = FastAPI(
title="XTTS Streaming server",
description="XTTS Streaming server",
version="0.0.1",
docs_url="/",
)
# Helper functions
def postprocess(wav):
if isinstance(wav, list):
wav = torch.cat(wav, dim=0)
wav = wav.clone().detach().cpu().numpy()
wav = wav[None, : int(wav.shape[0])]
wav = np.clip(wav, -1, 1)
wav = (wav * 32767).astype(np.int16)
return wav
def wav_data_generator(frame_input, sample_rate=24000, sample_width=2, channels=1):
wav_buf = io.BytesIO()
with wave.open(wav_buf, "wb") as vfout:
vfout.setnchannels(channels)
vfout.setsampwidth(sample_width)
vfout.setframerate(sample_rate)
vfout.writeframes(frame_input)
wav_buf.seek(0)
return wav_buf.read()
# Streaming generator
def predict_streaming_generator(text, language, add_wav_header, stream_chunk_size):
speaker_name = "Alison Dietlinde"
speaker_raw = model.speaker_manager.speakers[speaker_name]["speaker_embedding"].cpu().squeeze().half().tolist()
gpt_raw = model.speaker_manager.speakers[speaker_name]["gpt_cond_latent"].cpu().squeeze().half().tolist()
speaker_embedding = torch.tensor(speaker_raw).unsqueeze(0).unsqueeze(-1)
gpt_cond_latent = torch.tensor(gpt_raw).reshape((-1, 1024)).unsqueeze(0)
chunks = model.inference_stream(
text,
language,
gpt_cond_latent,
speaker_embedding,
stream_chunk_size=int(stream_chunk_size),
enable_text_splitting=True
)
for i, chunk in enumerate(chunks):
chunk = postprocess(chunk)
if i == 0 and add_wav_header:
yield wav_data_generator(b"")
yield chunk.tobytes()
else:
yield chunk.tobytes()
# FastAPI endpoint for streaming
@app.post("/tts_stream")
async def predict_streaming_endpoint(
text: str = Header(...),
language: str = Header(...),
add_wav_header: bool = Header(True),
stream_chunk_size: str = Header("20")
):
try:
return StreamingResponse(
predict_streaming_generator(text,language, add_wav_header, stream_chunk_size),
media_type="audio/wav"
)
except Exception as e:
raise
if __name__ == "__main__":
import uvicorn
uvicorn.run(app, host="0.0.0.0", port=8001)
```
Start the Docker container.
Make a POST request to the /tts_stream endpoint with the appropriate headers and data.
test.py:
```
import argparse
import json
import shutil
import subprocess
import sys
import time
from typing import Iterator
import requests
def is_installed(lib_name: str) -> bool:
lib = shutil.which(lib_name)
if lib is None:
return False
return True
def save(audio: bytes, filename: str) -> None:
with open(filename, "wb") as f:
f.write(audio)
def stream_ffplay(audio_stream, output_file, save=True):
if not save:
ffplay_cmd = ["ffplay", "-nodisp", "-probesize", "1024", "-autoexit", "-"]
else:
print("Saving to ", output_file)
ffplay_cmd = ["ffmpeg", "-probesize", "1024", "-i", "-", output_file]
ffplay_proc = subprocess.Popen(ffplay_cmd, stdin=subprocess.PIPE)
for chunk in audio_stream:
if chunk is not None:
ffplay_proc.stdin.write(chunk)
# close on finish
ffplay_proc.stdin.close()
ffplay_proc.wait()
def tts(text, language, server_url, stream_chunk_size) -> Iterator[bytes]:
start = time.perf_counter()
headers = {
"text": text,
"language": language,
"add_wav_header": "False",
"stream_chunk_size": stream_chunk_size,
}
res = requests.post(
f"{server_url}/tts_stream",
headers=headers,
stream=True
)
end = time.perf_counter()
print(f"Time to make POST: {end-start}s", file=sys.stderr)
if res.status_code != 200:
print("Error:", res.text)
sys.exit(1)
first = True
for chunk in res.iter_content(chunk_size=512):
if first:
end = time.perf_counter()
print(f"Time to first chunk: {end-start}s", file=sys.stderr)
first = False
if chunk:
yield chunk
print("⏱️ response.elapsed:", res.elapsed)
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument(
"--text",
default="It took me quite a long time to develop a voice and now that I have it I am not going to be silent.",
help="text input for TTS"
)
parser.add_argument(
"--language",
default="en",
help="Language to use default is 'en' (English)"
)
parser.add_argument(
"--output_file",
default=None,
help="Save TTS output to given filename"
)
parser.add_argument(
"--ref_file",
default=None,
help="Reference audio file to use, when not given will use default"
)
parser.add_argument(
"--server_url",
default="http://localhost:8000",
help="Server url http://localhost:8000 default, change to your server location "
)
parser.add_argument(
"--stream_chunk_size",
default="20",
help="Stream chunk size , 20 default, reducing will get faster latency but may degrade quality"
)
args = parser.parse_args()
with open("./default_speaker.json", "r") as file:
speaker = json.load(file)
if args.ref_file is not None:
print("Computing the latents for a new reference...")
audio = stream_ffplay(
tts(
args.text,
args.language,
args.server_url,
args.stream_chunk_size
),
args.output_file,
save=bool(args.output_file)
)
```
CMD:
```python test.py --text "This is a Test." --language en --server_url "http://localhost:8001" --stream_chunk_size 145```
### Expected behavior
_No response_
### Logs
```shell
TypeError: isin() received an invalid combination of arguments - got (test_elements=int, elements=Tensor, ), but expected one of:
* (Tensor elements, Tensor test_elements, *, bool assume_unique, bool invert, Tensor out)
* (Number element, Tensor test_elements, *, bool assume_unique, bool invert, Tensor out)
* (Tensor elements, Number test_element, *, bool assume_unique, bool invert, Tensor out)
```
### Environment
```shell
transformers: 4.43.1
torch: 2.2.0
torchaudio: 2.2.0
TTS: 0.22.0
Platform: Docker
```
### Additional context
_No response_ | closed | 2024-07-23T21:26:45Z | 2024-07-24T18:30:14Z | https://github.com/coqui-ai/TTS/issues/3833 | [
"bug"
] | Fledermaus-20 | 3 |
saulpw/visidata | pandas | 2,013 | [agg-sum] sum() takes no keyword arguments | **Small description**
sum aggregator no longer working. when applied to a column and a freq sheet is opened you get the following error in the sum agg col: `sum() takes no keyword arguments`
**Expected result**
sum aggregation on columns
**Actual result with screenshot**
```
Traceback (most recent call last):
File "/Users/geekscrapy/envs/py37/lib/python3.7/site-packages/visidata/threads.py", line 200, in _toplevelTryFunc
t.status = func(*args, **kwargs)
File "/Users/geekscrapy/envs/py37/lib/python3.7/site-packages/visidata/aggregators.py", line 183, in memo_aggregate
dispval = col.format(typedval)
File "/Users/geekscrapy/envs/py37/lib/python3.7/site-packages/visidata/column.py", line 229, in format
return self.make_formatter()(*args, **kwargs)
File "/Users/geekscrapy/envs/py37/lib/python3.7/site-packages/visidata/column.py", line 209, in _format_len
return self.formatValue(typedval, **kwargs)
File "/Users/geekscrapy/envs/py37/lib/python3.7/site-packages/visidata/column.py", line 244, in formatValue
return vd.getType(self.type).formatter(self.fmtstr, typedval)
File "/Users/geekscrapy/envs/py37/lib/python3.7/site-packages/visidata/type_floatsi.py", line 10, in SIFormatter
while abs(val) > 1000:
TypeError: bad operand type for abs(): 'TypedWrapper'
```
**Steps to reproduce with sample data and a .vd**
```
echo -e "A,B,C\n1,2,3" | vd -f csv -N
z+, sum
``` | closed | 2023-08-29T15:13:14Z | 2023-10-10T04:59:03Z | https://github.com/saulpw/visidata/issues/2013 | [
"bug",
"can't reproduce"
] | geekscrapy | 10 |
google-research/bert | nlp | 499 | Bert movie review - prediction's on same record are changing widely for each run | I ran the bert movie review notebook in google colab . Everytime I run the prediction , I'm getting new set of results with different confidence scores. Any reasons for the instability . After running it twice , my output was
[('That movie was absolutely awful', array([-5.1986515e-03, -5.2619457e+00], dtype=float32), 'Negative'), ('The acting was a bit lacking', array([-0.24452195, -1.5282212 ], dtype=float32), 'Negative'), ('The film was creative and surprising', array([-3.596842 , -0.0277928], dtype=float32), 'Positive'), ('Absolutely fantastic!', array([-5.3466649e+00, -4.7754287e-03], dtype=float32), 'Positive')]
[('That movie was absolutely awful', array([-4.5516458e-03, -5.3945374e+00], dtype=float32), 'Negative'), ('The acting was a bit lacking', array([-0.14651287, -1.9930043 ], dtype=float32), 'Negative'), ('The film was creative and surprising', array([-3.3452613 , -0.03588735], dtype=float32), 'Positive'), ('Absolutely fantastic!', array([-5.3226805e+00, -4.8915716e-03], dtype=float32), 'Positive')] | open | 2019-03-14T12:05:50Z | 2019-03-14T12:06:56Z | https://github.com/google-research/bert/issues/499 | [] | tvinith | 0 |
joke2k/django-environ | django | 254 | Code block is missing on RTFD for Multiple env files | The code block is missing for documentation on RTFD.
Please refer to the screenshot below:
**Github README**:
<img width="903" alt="github" src="https://user-images.githubusercontent.com/665982/77299664-c7b68080-6d27-11ea-976e-395148164f59.png">
https://github.com/joke2k/django-environ/tree/dc2d281e45eb41a723549e01e435ae1fe89b0a96#multiple-env-files
**RTFD**:
<img width="706" alt="readthedocs" src="https://user-images.githubusercontent.com/665982/77299687-ce44f800-6d27-11ea-8efe-594eb28a6407.png">
https://django-environ.readthedocs.io/en/latest/#multiple-env-files | closed | 2020-03-23T09:12:03Z | 2021-09-07T20:21:38Z | https://github.com/joke2k/django-environ/issues/254 | [
"bug",
"documentation"
] | chitak | 2 |
thp/urlwatch | automation | 375 | how to completely remove a job | I have used the --delete option, but I have noted that if I add it back urlwatch does not recognize it as new, somehow has kept memory of it! (I am trying to debug my sendmail notification, so this is standing in the way) | closed | 2019-04-13T00:50:52Z | 2020-07-10T13:45:39Z | https://github.com/thp/urlwatch/issues/375 | [] | foice | 5 |
miguelgrinberg/Flask-SocketIO | flask | 1,070 | Where to put the code to stop a background thread? | I have been unsuccessfully searching for quite a long time now to find the recommended way of stopping a background thread that has been started with something like:
```
socketio = SocketIO(app, async_mode='gevent')
background_thread = socketio.start_background_task(target=bg_task.watcher)
```
Can somebody tell me where to put a piece of code like:
```
bg_task.stop()
background_thread.join()
``` | closed | 2019-09-28T16:47:39Z | 2023-07-04T09:53:03Z | https://github.com/miguelgrinberg/Flask-SocketIO/issues/1070 | [
"question"
] | johschmitz | 18 |
ymcui/Chinese-LLaMA-Alpaca | nlp | 33 | 执行完 Step 2: 合并LoRA权重,生成全量模型权重,输出路径中找不到config.json,导致模型无法加载 | does not appear to have a file named config.json. | closed | 2023-04-03T03:24:28Z | 2023-04-04T12:34:38Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/33 | [] | candowu | 5 |
unit8co/darts | data-science | 1,924 | Support for probabilistic predictions with historical_forecasts | I want to use the historical_forecasts method instead of calling fit and predict to evaluate the forecasting models over the whole timeseries. However, in this method there is no support for setting num_samples to obtain a probablistic prediction like in the predict method. So the timeseries returned by historical_forecasts doesn't support calling quantile_df like the timeseries returned by predict. Is that a missing feature by the package or is there a way to overcome this limitation? Is there a reason for this limitation? | closed | 2023-07-28T15:41:28Z | 2023-07-29T01:01:16Z | https://github.com/unit8co/darts/issues/1924 | [
"question"
] | ramiyaari | 2 |
CorentinJ/Real-Time-Voice-Cloning | pytorch | 259 | can i use My intel HD graphic 620 instate cuda | closed | 2020-01-04T18:02:03Z | 2020-07-04T23:27:42Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/259 | [] | sam10001 | 2 | |
onnx/onnx | scikit-learn | 6,475 | source code question | Why not optimize this by determining if the index is already in the set?

| closed | 2024-10-20T08:50:58Z | 2024-11-01T14:54:16Z | https://github.com/onnx/onnx/issues/6475 | [
"question"
] | XiaBing992 | 1 |
SYSTRAN/faster-whisper | deep-learning | 649 | translate from English to Chinese failed | my code
`def stt():
try:
print("start stt")
# data:{file_path}
data = request.get_json()
filePath = data['file_path']
lang = data['lang']
whisper_instance = WhisperSingleton().model
defaultExpire = 3600 * 12
key = 'videoSrt.' + filePath
cc.RedisInst().setex(key, defaultExpire, json.dumps({'cur': 0}))
segments, info = whisper_instance.transcribe(filePath, vad_filter=True,language=lang,beam_size=5,
vad_parameters=dict(min_silence_duration_ms=500))
print("Detected language '%s' with probability %f" % (info.language, info.language_probability))
txtList = []
startTime = time.time()
for index, segment in enumerate(segments):
txtList.append(
{"index": index, "start": segment.start * 1000, "end": segment.end * 1000, "txt": segment.text})
print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
cc.RedisInst().setex(key, defaultExpire, json.dumps({'cur': segment.end}))
print("[耗时:%d]" % (time.time() - startTime))
cc.RedisInst().setex(key, defaultExpire, json.dumps({'cur': 'ok'}))
print("return data:%s" % (jsonify({"code": "200", "data": txtList})))
return jsonify({"code": "200", "data": txtList})
except Exception as e:
return jsonify({"code": 100, "msg": str(e)})`
and filePath is an English mp4,but the result was still english txt:

Could anybody help me~
| open | 2024-01-19T09:15:02Z | 2024-01-19T17:19:28Z | https://github.com/SYSTRAN/faster-whisper/issues/649 | [] | hsiar | 2 |
PaddlePaddle/PaddleHub | nlp | 2,278 | paddle本地到底如何安装? | 想跑下stable_diffusion,本地linux服务器装了paddlepaddle 2.4.2GPU版和paddlehub2.3.1,要么就是JSONDecoderError, hub install stable_diffusion==1.0.0的时候要么就是找不到stable_diffusion要么就是can't find module pelease check your spelling;搞了一天没搞出来,网上教程都不管用
| closed | 2023-07-17T12:20:20Z | 2023-09-20T11:05:53Z | https://github.com/PaddlePaddle/PaddleHub/issues/2278 | [] | wzf19947 | 1 |
microsoft/unilm | nlp | 1,049 | RuntimeError: Unknown model (beit3_large_patch16_224) | **Describe**
Model I am using BEITV3:
I meet this issue:
RuntimeError: Unknown model (beit3_large_patch16_224)
timm has no this model, and i didnt find it.
<img width="949" alt="截屏2023-03-31 下午4 17 33" src="https://user-images.githubusercontent.com/13979105/229065534-1cdf35cc-e560-49f9-94d4-07a9fcf39b96.png">
<img width="771" alt="截屏2023-03-31 下午4 19 57" src="https://user-images.githubusercontent.com/13979105/229065924-f76e9bad-4c9c-47e7-a980-eb6b6785a723.png">
how should i solve it | closed | 2023-03-31T05:59:23Z | 2023-06-08T06:25:13Z | https://github.com/microsoft/unilm/issues/1049 | [] | lc222 | 12 |
encode/databases | asyncio | 340 | PostgreSQL: Transaction rollback errors with "another operation is in progress" | I've noticed that if one of my tests fails, every next test will fail with error from `async with database` claiming that connection is still on.
Upon closer look I've found out that first tests failure triggers transaction's rollback which sends rollback to the database.
However `asyncpg.exceptions._base.InterfaceError: cannot perform operation: another operation is in progress` will be raised in response to this, resulting in invalid state:
```
is_connected = True
_transaction_stack = []
```
This state can't be fixed other by monkeypatching connection object, as trying to call `disconnect()` will try to pop item from empty `_transaction_stack`, causing other exception.
Min repro:
```
database = Database(url="...", force_rollback=True)
async with database:
await raise_exception()
async with database: # this will crash because previous database connection was not cleaned up completely.
database.exec("...")
``` | open | 2021-05-16T15:25:19Z | 2021-05-16T15:59:57Z | https://github.com/encode/databases/issues/340 | [] | rafalp | 0 |
psf/black | python | 4,351 | Failure to format when double-quotes are in an f-string's code block | <!--
Please make sure that the bug is not already fixed either in newer versions or the
current development version. To confirm this, you have three options:
1. Update Black's version if a newer release exists: `pip install -U black`
2. Use the online formatter at <https://black.vercel.app/?version=main>, which will use
the latest main branch.
3. Or run _Black_ on your machine:
- create a new virtualenv (make sure it's the same Python version);
- clone this repository;
- run `pip install -e .[d]`;
- run `pip install -r test_requirements.txt`
- make sure it's sane by running `python -m pytest`; and
- run `black` like you did last time.
-->
**Describe the bug**
See the code example.
**To Reproduce**
<!--
Minimal steps to reproduce the behavior with source code and Black's configuration.
-->
```python
test = {"asdf" : 23}
f"test['asdf']: {test["asdf"]}"
```
The resulting error is:
> [cannot format file.py: INTERNAL ERROR: ...](error: cannot format <base_path>/format_test.py: Cannot parse: 3:23: f"test['asdf']: {test["asdf"]}")
It fails at the first `"` inside the f-string's code block when indexing the dictionary.
As a note, the online formatter actually gives a different error, which makes even less sense:
> cannot use --safe with this file; failed to parse source file AST: f-string: unmatched '[' (<unknown>, line 3)
> This could be caused by running Black with an older Python version that does not support new syntax used in your source file.
**Expected behavior**
This is valid syntax. For example, in a Python REPL:
```
>>> test = {"asdf" : 23}
>>> type(test)
<class 'dict'>
>>> f"Testing['a']: {test["asdf"]}"
"Testing['a']: 23"
```
I expect no errors.
**Environment**
<!-- Please complete the following information: -->
- Black's version: `24.4.0`
- OS and Python version: Windows 11 and Python 3.12.1
**Additional context**
There is a workaround, and that is to replace the dictionary indexing double-quotes with single-quotes. Black is able to format this equivalent code:
```python
test = {"asdf" : 23}
f"test['asdf']: {test['asdf']}"
```
Also, regarding the online formatter, when I make this adjustment, it is also able to complete its formatting of this code. So it has an additional issue complaining about an unmatched `[`, which is obviously not the case if the switch from `"asdf"` to `'asdf'` fixes that error. | closed | 2024-05-07T16:04:24Z | 2024-05-28T20:00:12Z | https://github.com/psf/black/issues/4351 | [
"T: bug"
] | bmitc | 8 |
nikitastupin/clairvoyance | graphql | 3 | Utilise obtained names for probing | For example, we can break `maxAtmospheringSpeed` to `max`, `maxAtmosphering`, `AtmospheringSpeed`, `max`, `Atmosphering` and `Speed` names and use them for probing!
We can also add these to wordlist so they will be used in consequent probes 😃 | open | 2020-10-23T09:30:07Z | 2024-08-26T19:13:33Z | https://github.com/nikitastupin/clairvoyance/issues/3 | [
"enhancement"
] | nikitastupin | 1 |
dynaconf/dynaconf | fastapi | 1,163 | [RFC] Deprecate init and write commands on 4.0.0 | ### Discussed in https://github.com/dynaconf/dynaconf/discussions/1154
<div type='discussions-op-text'>
<sup>Originally posted by **rochacbruno** July 8, 2024</sup>
I suggest that on 4.0.0 we remove the following CLI commands
- init
- Users will need to write their schema manually so this command will not be very helpful
- write
- USers will need to use each source client to write, dynaconf will only read data from sources.
Changes to commands:
- list
- cleanup the tabular formatting, remove all coloring
Any comments? is there someone using those commands?</div>
closes https://github.com/dynaconf/dynaconf/issues/908 https://github.com/dynaconf/dynaconf/issues/936 | open | 2024-07-12T15:33:37Z | 2025-03-03T16:51:16Z | https://github.com/dynaconf/dynaconf/issues/1163 | [
"enhancement",
"Not a Bug",
"RFC",
"CLI",
"4.0-breaking-change"
] | rochacbruno | 4 |
flaskbb/flaskbb | flask | 537 | Ban/edit user in topic view | closed | 2019-10-17T19:25:36Z | 2020-12-04T17:56:08Z | https://github.com/flaskbb/flaskbb/issues/537 | [
"feature"
] | sh4nks | 1 | |
tox-dev/tox | automation | 3,054 | documentation build error for tox version 3 | ERROR: type should be string, got "https://readthedocs.org/projects/tox/builds/21135448/\r\n\r\n```\r\npython -m sphinx -T -E -W --keep-going -b html -d _build/doctrees -D language=en . $READTHEDOCS_OUTPUT/html\r\nRunning Sphinx v7.0.1\r\nmaking output directory... done\r\nWARNING: The pre-Sphinx 1.0 'intersphinx_mapping' format is deprecated and will be removed in Sphinx 8. Update to the current format as described in the documentation. Hint: \"intersphinx_mapping = {'<name>': ('https://docs.python.org/', None)}\".https://www.sphinx-doc.org/en/master/usage/extensions/intersphinx.html#confval-intersphinx_mapping\r\nloading intersphinx inventory from https://docs.python.org/objects.inv...\r\nintersphinx inventory has moved: https://docs.python.org/objects.inv -> https://docs.python.org/3/objects.inv\r\nbuilding [mo]: targets for 0 po files that are out of date\r\nwriting output... \r\nbuilding [html]: targets for 22 source files that are out of date\r\nupdating environment: [new config] 22 added, 0 changed, 0 removed\r\nreading sources... [ 4%] _draft\r\nreading sources... [ 9%] announce/changelog-only\r\nreading sources... [ 13%] changelog\r\n\r\nTraceback (most recent call last):\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/tox/envs/3.28.0/lib/python3.11/site-packages/sphinx/cmd/build.py\", line 285, in build_main\r\n app.build(args.force_all, args.filenames)\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/tox/envs/3.28.0/lib/python3.11/site-packages/sphinx/application.py\", line 351, in build\r\n self.builder.build_update()\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/tox/envs/3.28.0/lib/python3.11/site-packages/sphinx/builders/__init__.py\", line 294, in build_update\r\n self.build(to_build,\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/tox/envs/3.28.0/lib/python3.11/site-packages/sphinx/builders/__init__.py\", line 311, in build\r\n updated_docnames = set(self.read())\r\n ^^^^^^^^^^^\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/tox/envs/3.28.0/lib/python3.11/site-packages/sphinx/builders/__init__.py\", line 418, in read\r\n self._read_serial(docnames)\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/tox/envs/3.28.0/lib/python3.11/site-packages/sphinx/builders/__init__.py\", line 439, in _read_serial\r\n self.read_doc(docname)\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/tox/envs/3.28.0/lib/python3.11/site-packages/sphinx/builders/__init__.py\", line 495, in read_doc\r\n publisher.publish()\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/tox/envs/3.28.0/lib/python3.11/site-packages/docutils/core.py\", line 234, in publish\r\n self.document = self.reader.read(self.source, self.parser,\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/tox/envs/3.28.0/lib/python3.11/site-packages/sphinx/io.py\", line 104, in read\r\n self.parse()\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/tox/envs/3.28.0/lib/python3.11/site-packages/docutils/readers/__init__.py\", line 76, in parse\r\n self.parser.parse(self.input, document)\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/tox/envs/3.28.0/lib/python3.11/site-packages/sphinx/parsers.py\", line 80, in parse\r\n self.statemachine.run(inputlines, document, inliner=self.inliner)\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/tox/envs/3.28.0/lib/python3.11/site-packages/docutils/parsers/rst/states.py\", line 169, in run\r\n results = StateMachineWS.run(self, input_lines, input_offset,\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/tox/envs/3.28.0/lib/python3.11/site-packages/docutils/statemachine.py\", line 233, in run\r\n context, next_state, result = self.check_line(\r\n ^^^^^^^^^^^^^^^^\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/tox/envs/3.28.0/lib/python3.11/site-packages/docutils/statemachine.py\", line 445, in check_line\r\n return method(match, context, next_state)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/tox/envs/3.28.0/lib/python3.11/site-packages/docutils/parsers/rst/states.py\", line 2785, in underline\r\n self.section(title, source, style, lineno - 1, messages)\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/tox/envs/3.28.0/lib/python3.11/site-packages/docutils/parsers/rst/states.py\", line 325, in section\r\n self.new_subsection(title, lineno, messages)\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/tox/envs/3.28.0/lib/python3.11/site-packages/docutils/parsers/rst/states.py\", line 391, in new_subsection\r\n newabsoffset = self.nested_parse(\r\n ^^^^^^^^^^^^^^^^^^\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/tox/envs/3.28.0/lib/python3.11/site-packages/docutils/parsers/rst/states.py\", line 279, in nested_parse\r\n state_machine.run(block, input_offset, memo=self.memo,\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/tox/envs/3.28.0/lib/python3.11/site-packages/docutils/parsers/rst/states.py\", line 195, in run\r\n results = StateMachineWS.run(self, input_lines, input_offset)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/tox/envs/3.28.0/lib/python3.11/site-packages/docutils/statemachine.py\", line 233, in run\r\n context, next_state, result = self.check_line(\r\n ^^^^^^^^^^^^^^^^\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/tox/envs/3.28.0/lib/python3.11/site-packages/docutils/statemachine.py\", line 445, in check_line\r\n return method(match, context, next_state)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/tox/envs/3.28.0/lib/python3.11/site-packages/docutils/parsers/rst/states.py\", line 2785, in underline\r\n self.section(title, source, style, lineno - 1, messages)\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/tox/envs/3.28.0/lib/python3.11/site-packages/docutils/parsers/rst/states.py\", line 325, in section\r\n self.new_subsection(title, lineno, messages)\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/tox/envs/3.28.0/lib/python3.11/site-packages/docutils/parsers/rst/states.py\", line 391, in new_subsection\r\n newabsoffset = self.nested_parse(\r\n ^^^^^^^^^^^^^^^^^^\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/tox/envs/3.28.0/lib/python3.11/site-packages/docutils/parsers/rst/states.py\", line 279, in nested_parse\r\n state_machine.run(block, input_offset, memo=self.memo,\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/tox/envs/3.28.0/lib/python3.11/site-packages/docutils/parsers/rst/states.py\", line 195, in run\r\n results = StateMachineWS.run(self, input_lines, input_offset)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/tox/envs/3.28.0/lib/python3.11/site-packages/docutils/statemachine.py\", line 233, in run\r\n context, next_state, result = self.check_line(\r\n ^^^^^^^^^^^^^^^^\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/tox/envs/3.28.0/lib/python3.11/site-packages/docutils/statemachine.py\", line 445, in check_line\r\n return method(match, context, next_state)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/tox/envs/3.28.0/lib/python3.11/site-packages/docutils/parsers/rst/states.py\", line 2785, in underline\r\n self.section(title, source, style, lineno - 1, messages)\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/tox/envs/3.28.0/lib/python3.11/site-packages/docutils/parsers/rst/states.py\", line 325, in section\r\n self.new_subsection(title, lineno, messages)\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/tox/envs/3.28.0/lib/python3.11/site-packages/docutils/parsers/rst/states.py\", line 391, in new_subsection\r\n newabsoffset = self.nested_parse(\r\n ^^^^^^^^^^^^^^^^^^\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/tox/envs/3.28.0/lib/python3.11/site-packages/docutils/parsers/rst/states.py\", line 279, in nested_parse\r\n state_machine.run(block, input_offset, memo=self.memo,\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/tox/envs/3.28.0/lib/python3.11/site-packages/docutils/parsers/rst/states.py\", line 195, in run\r\n results = StateMachineWS.run(self, input_lines, input_offset)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/tox/envs/3.28.0/lib/python3.11/site-packages/docutils/statemachine.py\", line 233, in run\r\n context, next_state, result = self.check_line(\r\n ^^^^^^^^^^^^^^^^\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/tox/envs/3.28.0/lib/python3.11/site-packages/docutils/statemachine.py\", line 445, in check_line\r\n return method(match, context, next_state)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/tox/envs/3.28.0/lib/python3.11/site-packages/docutils/parsers/rst/states.py\", line 1273, in bullet\r\n i, blank_finish = self.list_item(match.end())\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/tox/envs/3.28.0/lib/python3.11/site-packages/docutils/parsers/rst/states.py\", line 1295, in list_item\r\n self.nested_parse(indented, input_offset=line_offset,\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/tox/envs/3.28.0/lib/python3.11/site-packages/docutils/parsers/rst/states.py\", line 279, in nested_parse\r\n state_machine.run(block, input_offset, memo=self.memo,\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/tox/envs/3.28.0/lib/python3.11/site-packages/docutils/parsers/rst/states.py\", line 195, in run\r\n results = StateMachineWS.run(self, input_lines, input_offset)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/tox/envs/3.28.0/lib/python3.11/site-packages/docutils/statemachine.py\", line 233, in run\r\n context, next_state, result = self.check_line(\r\n ^^^^^^^^^^^^^^^^\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/tox/envs/3.28.0/lib/python3.11/site-packages/docutils/statemachine.py\", line 445, in check_line\r\n return method(match, context, next_state)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/tox/envs/3.28.0/lib/python3.11/site-packages/docutils/parsers/rst/states.py\", line 2799, in text\r\n paragraph, literalnext = self.paragraph(lines, startline)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/tox/envs/3.28.0/lib/python3.11/site-packages/docutils/parsers/rst/states.py\", line 416, in paragraph\r\n textnodes, messages = self.inline_text(text, lineno)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/tox/envs/3.28.0/lib/python3.11/site-packages/docutils/parsers/rst/states.py\", line 425, in inline_text\r\n nodes, messages = self.inliner.parse(text, lineno,\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/tox/envs/3.28.0/lib/python3.11/site-packages/docutils/parsers/rst/states.py\", line 649, in parse\r\n before, inlines, remaining, sysmessages = method(self, match,\r\n ^^^^^^^^^^^^^^^^^^^\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/tox/envs/3.28.0/lib/python3.11/site-packages/docutils/parsers/rst/states.py\", line 792, in interpreted_or_phrase_ref\r\n nodelist, messages = self.interpreted(rawsource, escaped, role,\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/tox/envs/3.28.0/lib/python3.11/site-packages/docutils/parsers/rst/states.py\", line 889, in interpreted\r\n nodes, messages2 = role_fn(role, rawsource, text, lineno, self)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/tox/envs/3.28.0/lib/python3.11/site-packages/sphinx/ext/extlinks.py\", line 103, in role\r\n title = caption % part\r\n ~~~~~~~~^~~~~~\r\nTypeError: not all arguments converted during string formatting\r\n\r\nException occurred:\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/tox/envs/3.28.0/lib/python3.11/site-packages/sphinx/ext/extlinks.py\", line 103, in role\r\n title = caption % part\r\n ~~~~~~~~^~~~~~\r\nTypeError: not all arguments converted during string formatting\r\nThe full traceback has been saved in /tmp/sphinx-err-_5j6vwfl.log, if you want to report the issue to the developers.\r\nPlease also report this if it was a user error, so that a better error message can be provided next time.\r\nA bug report can be filed in the tracker at <https://github.com/sphinx-doc/sphinx/issues>. Thanks!\r\nCommand time: 1s Return: 2\r\n```" | closed | 2023-06-28T08:27:16Z | 2023-06-28T13:15:07Z | https://github.com/tox-dev/tox/issues/3054 | [] | jugmac00 | 1 |
sunscrapers/djoser | rest-api | 675 | Djoser django.core.mail.backends.smtp.EmailBackend' Error do not work.. |
When I try to send an activation mail when creating a User, I see an error concerning the smtp lib and I realize that it is related to DJoser and the django EMail BAckend messaging engine..
Error
Internal Server Error: /auth/users/
Traceback (most recent call last):
File "/home/princeg/Bureau/django_authentication_react/venv/lib/python3.9/site-packages/django/core/handlers/exception.py", line 55, in inner
response = get_response(request)
File "/home/princeg/Bureau/django_authentication_react/venv/lib/python3.9/site-packages/django/core/handlers/base.py", line 197, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/home/princeg/Bureau/django_authentication_react/venv/lib/python3.9/site-packages/django/views/decorators/csrf.py", line 54, in wrapped_view
return view_func(*args, **kwargs)
File "/home/princeg/Bureau/django_authentication_react/venv/lib/python3.9/site-packages/rest_framework/viewsets.py", line 125, in view
return self.dispatch(request, *args, **kwargs)
File "/home/princeg/Bureau/django_authentication_react/venv/lib/python3.9/site-packages/rest_framework/views.py", line 509, in dispatch
response = self.handle_exception(exc)
File "/home/princeg/Bureau/django_authentication_react/venv/lib/python3.9/site-packages/rest_framework/views.py", line 469, in handle_exception
self.raise_uncaught_exception(exc)
File "/home/princeg/Bureau/django_authentication_react/venv/lib/python3.9/site-packages/rest_framework/views.py", line 480, in raise_uncaught_exception
raise exc
File "/home/princeg/Bureau/django_authentication_react/venv/lib/python3.9/site-packages/rest_framework/views.py", line 506, in dispatch
response = handler(request, *args, **kwargs)
File "/home/princeg/Bureau/django_authentication_react/venv/lib/python3.9/site-packages/rest_framework/mixins.py", line 19, in create
self.perform_create(serializer)
File "/home/princeg/Bureau/django_authentication_react/venv/lib/python3.9/site-packages/djoser/views.py", line 144, in perform_create
settings.EMAIL.activation(self.request, context).send(to)
File "/home/princeg/Bureau/django_authentication_react/venv/lib/python3.9/site-packages/templated_mail/mail.py", line 78, in send
super(BaseEmailMessage, self).send(*args, **kwargs)
File "/home/princeg/Bureau/django_authentication_react/venv/lib/python3.9/site-packages/django/core/mail/message.py", line 298, in send
return self.get_connection(fail_silently).send_messages([self])
File "/home/princeg/Bureau/django_authentication_react/venv/lib/python3.9/site-packages/django/core/mail/backends/smtp.py", line 124, in send_messages
new_conn_created = self.open()
File "/home/princeg/Bureau/django_authentication_react/venv/lib/python3.9/site-packages/django/core/mail/backends/smtp.py", line 91, in open
self.connection.login(self.username, self.password)
File "/usr/lib/python3.9/smtplib.py", line 739, in login
(code, resp) = self.auth(
File "/usr/lib/python3.9/smtplib.py", line 641, in auth
response = encode_base64(initial_response.encode('ascii'), eol='')
AttributeError: 'tuple' object has no attribute 'encode'
[04/Jun/2022 14:26:59] "POST /auth/users/ HTTP/1.1" 500 126177
| open | 2022-06-04T14:33:02Z | 2022-06-04T14:33:02Z | https://github.com/sunscrapers/djoser/issues/675 | [] | princeGedeon | 0 |
ccxt/ccxt | api | 25,532 | Coinbase segfault (Go) | ### Operating System
macOS
### Programming Languages
Go
### CCXT Version
4.4.68
### Description
Checking for margin markets on Coinbase will cause a segfault.
### Code
```go
k := ccxt.NewCoinbase(nil)
<-k.LoadMarkets(nil)
fmt.Printf("ccxt version: %s\n", ccxt.Version)
m, _ := k.FetchMarkets(nil)
m0 := m[0]
if !*m0.Margin {
fmt.Println("test")
}
```
Result
```log
ccxt version: 4.4.68
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x2 addr=0x0 pc=0x10595d8c4]
``` | open | 2025-03-22T22:22:05Z | 2025-03-24T11:13:26Z | https://github.com/ccxt/ccxt/issues/25532 | [
"question"
] | RealAtix | 6 |
aiortc/aiortc | asyncio | 697 | peer connections keeps waiting | I am testing server.py example. it works fine at localhost. But peer connection keeps waiting when I move this to online. I am using ubuntu, nginx and using proxy so that my https://domain.com is sent to 127.0.0.1:8080. I have attached the response at the sever which shows status of peer connection. I could not find a way to add screenshot. It start with state.forzen, then state.waitng , state.waiting and then state.failed. IceFailed.
I am using ssl at server which is accepted by chrome. | closed | 2022-04-19T17:24:45Z | 2022-04-22T05:54:32Z | https://github.com/aiortc/aiortc/issues/697 | [
"invalid"
] | opusviz | 10 |
OFA-Sys/Chinese-CLIP | computer-vision | 186 | 支持开源的其他VIT预训练权重么 | hi,如题,我看model中的VIT是作者手动实现的,请问可以加载例如基于timm 的VIT训练的权重么。
谢谢~ | closed | 2023-08-14T09:35:10Z | 2023-08-15T11:37:52Z | https://github.com/OFA-Sys/Chinese-CLIP/issues/186 | [] | liszekei | 1 |
google/seq2seq | tensorflow | 159 | Win10: tensorflow.contrib.tfprof has no attribute 'model_analyzer'. | On a Windows 10 system, Python 3.5.2, TF 1.0. Runs two tests, fails the last one with this message:
```
======================================================================
ERROR: test_train_infer (seq2seq.test.pipeline_test.PipelineTest)
Tests training and inference scripts.
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\Users\work\Documents\seq2seq\seq2seq\test\pipeline_test.py", line 148, in test_train_infer
train_script.main([])
File "C:\Users\work\Documents\seq2seq\bin\train.py", line 271, in main
schedule=FLAGS.schedule)
File "C:\Users\work\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\contrib\learn\python\learn\learn_runner.py", line 106, in run
return task()
File "C:\Users\work\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\contrib\learn\python\learn\experiment.py", line 459, in train_and_evaluate
self.train(delay_secs=0)
File "C:\Users\work\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\contrib\learn\python\learn\experiment.py", line 281, in train
monitors=self._train_monitors + extra_hooks)
File "C:\Users\work\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\util\deprecation.py", line 280, in new_func
return func(*args, **kwargs)
File "C:\Users\work\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\contrib\learn\python\learn\estimators\estimator.py", line 426, in fit
loss = self._train_model(input_fn=input_fn, hooks=hooks)
File "C:\Users\work\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\contrib\learn\python\learn\estimators\estimator.py", line 981, in _train_model
config=self.config.tf_config) as mon_sess:
File "C:\Users\work\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\training\monitored_session.py", line 315, in MonitoredTrainingSession
return MonitoredSession(session_creator=session_creator, hooks=all_hooks)
File "C:\Users\work\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\training\monitored_session.py", line 601, in __init__
session_creator, hooks, should_recover=True)
File "C:\Users\work\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\training\monitored_session.py", line 428, in __init__
h.begin()
File "C:\Users\work\Documents\seq2seq\seq2seq\training\hooks.py", line 246, in begin
opts = tf.contrib.tfprof.model_analyzer.TRAINABLE_VARS_PARAMS_STAT_OPTIONS
AttributeError: module 'tensorflow.contrib.tfprof' has no attribute 'model_analyzer'
----------------------------------------------------------------------
Ran 2 tests in 5.876s
FAILED (errors=1)
```
From what I've seen in [other issues](https://github.com/tensorflow/tensorflow/issues/6791), tf.contrib support is not currently complete for Windows, so I'm guessing that's the problem. | closed | 2017-04-13T04:21:25Z | 2017-06-25T17:25:06Z | https://github.com/google/seq2seq/issues/159 | [] | PAK90 | 3 |
django-import-export/django-import-export | django | 1,110 | export data for parent foreign key field that reference to self not working | I have model with field **parent** ref to '**self**' :
e.g:
parent = models.ForeignKey('self',verbose_name="parent", on_delete=models.CASCADE, null=True, blank=True, related_name='children')
> when export data the field value is '_str_' for instance , not '_id_' for primary key !!!
**but if I change it to:**
parent = models.ForeignKey('app_lable.model_name',verbose_name="parent", on_delete=models.CASCADE, null=True, blank=True, related_name='children')
**It's work fine.** | closed | 2020-04-19T14:23:38Z | 2023-04-12T17:29:43Z | https://github.com/django-import-export/django-import-export/issues/1110 | [] | EngFarisAlsmawi | 3 |
dunossauro/fastapi-do-zero | pydantic | 168 | Exercícios resolvidos | - [x] 02
- [x] 03
- [x] 04
- [x] 05
- [x] 06
- [x] 08
- [x] 09 | closed | 2024-06-08T05:25:10Z | 2024-10-05T04:57:08Z | https://github.com/dunossauro/fastapi-do-zero/issues/168 | [] | dunossauro | 1 |
amdegroot/ssd.pytorch | computer-vision | 92 | How can I fine tune a SSD network ? | I want to use a pre-trained ResNet as base network. Then I add some additional layers on top of the specific layer of base network. How can I finetuning it? | open | 2018-01-10T14:02:30Z | 2018-02-21T06:26:15Z | https://github.com/amdegroot/ssd.pytorch/issues/92 | [] | woaichipinngguo | 4 |
iMerica/dj-rest-auth | rest-api | 118 | Password reset throws error with django 3.1 | Hello,
recently updated django to version 3.1 and the password reset view throws error:
django.urls.exceptions.NoReverseMatch: Reverse for 'password_reset_confirm' with keyword arguments '{'uidb64': 'MTA', 'token': 'a88hdr-5fb0f72acbe1e9d2b81b7810dee31037'}' not found. 1 pattern(s) tried: ['usermanagement\\/password-reset/confirm/(?P<uidb64>[0-9A-Za-z_\\-]+)/(?P<token>[0-9A-Za-z]{1,13}-[0-9A-Za-z]{1,20})/$']
If i roll back to django 3.0 it works again. | closed | 2020-08-07T10:16:36Z | 2020-12-29T10:57:18Z | https://github.com/iMerica/dj-rest-auth/issues/118 | [
"bug"
] | IIFelix | 14 |
rougier/numpy-100 | numpy | 215 | Exercise N62 - There is an unnecessary code, Exercise N80 - IndexError | **Exercise 62** - Considering two arrays with shape (1,3) and (3,1), how to compute their sum using an iterator?
**Code**
```python
a = np.random.randint(0, 10, (1, 3))
b = np.random.randint(0, 10, (3, 1))
it = np.nditer([a,b,None])
for x,y,z in it:
z = x + y
print(it.operands[2])
```
**Unnecessary part**
```python
for x,y,z in it:
z = x + y
```
After `it = np.nditer([a,b,None])` `it.operands[2]` already has that value.
**Exercise N80**
**Current code that gives error**
```python
r = [slice(start,stop) for start,stop in zip(R_start,R_stop)]
z = [slice(start,stop) for start,stop in zip(Z_start,Z_stop)]
R[r] = Z[z]
```
**Output**
```
IndexError: only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices
```
**Solution** - Use tuples
```python
r = tuple(slice(start,stop) for start,stop in zip(R_start,R_stop))
z = tuple(slice(start,stop) for start,stop in zip(Z_start,Z_stop))
``` | open | 2024-09-27T12:54:55Z | 2024-10-21T13:57:58Z | https://github.com/rougier/numpy-100/issues/215 | [] | Artur-Arstamyan | 1 |
pallets-eco/flask-sqlalchemy | flask | 723 | Add a contributing guidelines document? | Hi there! 👋
Thanks for creating a great package - we use it for our [FEC API](https://github.com/fecgov/openFEC). In preparation for PyCon sprints, I was checking out your project and found that there isn't a contributing guide.
Here's the [GitHub info](https://help.github.com/en/articles/setting-guidelines-for-repository-contributors) on why contributing guidelines are helpful - I'd be happy to work on this issue if you'd like!
Thanks again,
Laura | closed | 2019-05-06T05:14:32Z | 2020-12-05T19:58:33Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/723 | [
"docs"
] | lbeaufort | 1 |
python-arq/arq | asyncio | 264 | Question: how to run a job that creates more jobs? | Is there a better way to do this?
worker.py:
```python
redis_settings = RedisSettings.from_dsn(config.REDIS_URL)
redis_pool = asyncio.run(create_pool(redis_settings))
async def startup(ctx):
ctx['redis'] = redis_pool
await setup()
async def shutdown(ctx):
pass
class WorkerSettings:
functions = [task]
on_startup = startup
on_shutdown = shutdown
redis_settings = redis_settings
```
task.py:
```python
async def task(ctx, collection_id):
"""Given a collection, create jobs to populate all of the assets. """
log, stats = setup_task(TaskName.POPULATE_COLLECTION)
ctx['redis'].enqueue_job(...)
```
| open | 2021-09-23T00:38:40Z | 2022-10-10T15:51:59Z | https://github.com/python-arq/arq/issues/264 | [] | omarish | 1 |
slackapi/python-slack-sdk | asyncio | 1,441 | Built-in InstallationStores fail to resolve a valid bot token when both bot and user-only installations co-exist in database tables | This happens in apps that implement both org-wide bot-scope installs together with individual user-scope installs.
Examples where an app would want to support this: (a) handling "[sign in with Slack](https://api.slack.com/authentication/sign-in-with-slack)" using Bolt and `/oauth_redirect` post-auth redirect url; or (b) offering extra user-level functionality to individuals on top of the org-wide more restricted bot-scope functionality, e.g. see this older discussion: [link](https://github.com/slackapi/bolt-python/issues/574#issuecomment-1023864177)
**Steps to reproduce:**
1. Have the app installed as a bot in slack workspace -> creates a row with bot token and bot scopes in the installation table, but no user tokens in the same row.
2. Get some user go through a user-token based oauth route (_not_ the install to workspace route) -> it creates a row with user token only.
3. Now, let's say the same user mentions the bot (from step 1) in Slack –> event is triggered and our Bolt program eventually gets into the method `SQLAlchemyInstallationStore.find_installation`
**Expected**: we are supposed to find the bot token to respond to the mention.
**What actually happens**. Error will be logged and the mention event will be ignored, here is why. This query:
https://github.com/slackapi/python-slack-sdk/blob/09fb1db018ebd55b9b20fc045cefea5f0d9008c1/slack_sdk/oauth/installation_store/sqlalchemy/__init__.py#L263
gets us the user token row from Step 2, not the one from Step 1. Why? B/c (a) it's the most recent `installed_at`, and (b) the argument `user_id` passed to the function is `None` (as happens when using Bolt - e.g. called [from here](https://github.com/slackapi/bolt-python/blob/5e63905ef3161dfe523a4415cc3d6aa807f31763/slack_bolt/authorization/async_authorize.py#L190)). So at this point we are missing the bot scopes we actually wanted (in order to get the bot respond to the reply).
You would think that the following part of the method is directly aimed at this scenario, i.e. it should go the extra step to retrieve the bot token, but _**since user_id is None, this block is never executed**_:
https://github.com/slackapi/python-slack-sdk/blob/09fb1db018ebd55b9b20fc045cefea5f0d9008c1/slack_sdk/oauth/installation_store/sqlalchemy/__init__.py#L297-L315
As a result, the error is [logged](https://github.com/slackapi/bolt-python/blob/5e63905ef3161dfe523a4415cc3d6aa807f31763/slack_bolt/middleware/authorization/async_multi_teams_authorization.py#L90) (if using the Bolt App):
> "Although the app should be installed into this workspace, the AuthorizeResult (returned value from authorize) for it was not found."
And thus the user does not get to see any response from the bot.
**Moreover**, this outcome depends on the order of the installation rows. If someone reinstalls the bot to the workspace, then the mention will now start working because row with the bot token will become the one with the most recent `installed_at` again. If the user then does user-token oauth again, the mention will stop working again. And so on – leading to unpredictable bot behaviour from the users' point of view.
| closed | 2023-12-02T10:50:57Z | 2023-12-04T09:10:30Z | https://github.com/slackapi/python-slack-sdk/issues/1441 | [
"bug",
"enhancement",
"Version: 3x",
"oauth"
] | kulmatitskiy | 2 |
ultralytics/ultralytics | deep-learning | 19,361 | export `engine` with `workspace` param in jetson will error | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### Ultralytics YOLO Component
Export
### Bug
in `jetson jetpack 6.2`
```
MODEL_PATH = "/opt/yolo/models/yolov8l-obb-640-53.pt"
models = YOLO(MODEL_PATH, task="obb")
models.export(format="engine",
half=True,
imgsz=640,
device="0",
dynamic=True,
simplify=True,
nms=True,
batch=6,
workspace=3.5
)
````
will error
```
Ultralytics 8.3.50 🚀 Python-3.10.12 torch-2.5.0a0+872d972e41.nv24.08 CUDA:0 (Orin, 62841MiB)
YOLOv8l-obb summary (fused): 305 layers, 40,352,167 parameters, 0 gradients, 149.3 GFLOPs
PyTorch: starting from '/opt/yolo/models/yolov8l-obb-640-53.pt' with input shape (6, 3, 640, 640) BCHW and output shape(s) (6, 97, 8400) (77.6 MB)
ONNX: starting export with onnx 1.17.0 opset 19...
ONNX: slimming with onnxslim 0.1.48...
ONNX: export success ✅ 33.9s, saved as '/opt/yolo/models/yolov8l-obb-640-53.onnx' (154.1 MB)
TensorRT: starting export with TensorRT 10.3.0...
[02/21/2025-18:32:23] [TRT] [I] [MemUsageChange] Init CUDA: CPU +1, GPU +0, now: CPU 1517, GPU 7824 (MiB)
[02/21/2025-18:32:24] [TRT] [I] [MemUsageChange] Init builder kernel library: CPU +928, GPU +743, now: CPU 2488, GPU 8543 (MiB)
[02/21/2025-18:32:24] [TRT] [I] ----------------------------------------------------------------
[02/21/2025-18:32:24] [TRT] [I] Input filename: /opt/yolo/models/yolov8l-obb-640-53.onnx
[02/21/2025-18:32:24] [TRT] [I] ONNX IR version: 0.0.9
[02/21/2025-18:32:24] [TRT] [I] Opset version: 19
[02/21/2025-18:32:24] [TRT] [I] Producer name: pytorch
[02/21/2025-18:32:24] [TRT] [I] Producer version: 2.5.0
[02/21/2025-18:32:24] [TRT] [I] Domain:
[02/21/2025-18:32:24] [TRT] [I] Model version: 0
[02/21/2025-18:32:24] [TRT] [I] Doc string:
[02/21/2025-18:32:24] [TRT] [I] ----------------------------------------------------------------
TensorRT: input "images" with shape(-1, 3, -1, -1) DataType.FLOAT
TensorRT: output "output0" with shape(-1, 97, -1) DataType.FLOAT
TensorRT: building FP16 engine as /opt/yolo/models/yolov8l-obb-640-53.engine
[02/21/2025-18:32:25] [TRT] [E] [dims32.cpp::toDims32::45] Error Code 3: API Usage Error (Dimension 2 has value 2405181685760 which exceeds range of int32_t)
[02/21/2025-18:32:25] [TRT] [E] IBuilder::buildSerializedNetwork: Error Code 4: API Usage Error (DLA validation failed)
TensorRT: export failure ❌ 36.1s: __enter__
Traceback (most recent call last):
File "/opt/yolo/test/yolobox_speed/predwzf.py", line 18, in <module>
models.export(format="engine",
File "/usr/local/lib/python3.10/dist-packages/ultralytics/engine/model.py", line 738, in export
return Exporter(overrides=args, _callbacks=self.callbacks)(model=self.model)
File "/usr/local/lib/python3.10/dist-packages/ultralytics/engine/exporter.py", line 359, in __call__
f[1], _ = self.export_engine(dla=dla)
File "/usr/local/lib/python3.10/dist-packages/ultralytics/engine/exporter.py", line 146, in outer_func
raise e
File "/usr/local/lib/python3.10/dist-packages/ultralytics/engine/exporter.py", line 141, in outer_func
f, model = inner_func(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/ultralytics/engine/exporter.py", line 898, in export_engine
with build(network, config) as engine, open(f, "wb") as t:
AttributeError: __enter__
```
comment out the `workspace` parameter the program will ok
### Environment
same as the above log
### Minimal Reproducible Example
same as the above
### Additional
no
### Are you willing to submit a PR?
- [x] Yes I'd like to help by submitting a PR! | closed | 2025-02-21T10:47:04Z | 2025-03-06T14:28:02Z | https://github.com/ultralytics/ultralytics/issues/19361 | [
"bug",
"fixed",
"OBB",
"exports"
] | jaffe-fly | 2 |
Yorko/mlcourse.ai | matplotlib | 749 | Fix MyST header anchors | Check auto-generated anchors, e.g.:
`myst-anchors -l 3 mlcourse_ai_jupyter_book/book/topic02/topic02_visual_data_analysis.md`
[MyST docs](https://myst-parser.readthedocs.io/en/latest/syntax/optional.html#auto-generated-header-anchors)
| closed | 2023-05-17T14:22:11Z | 2024-08-19T16:42:26Z | https://github.com/Yorko/mlcourse.ai/issues/749 | [] | Yorko | 2 |
matterport/Mask_RCNN | tensorflow | 2,762 | different outputs for keras.layers.TimeDistributed() | moving from
`Ubuntu 18.04 | CUDA 10.1 | tensorflow 2.1.0 | keras 2.3`
to
`Ubuntu 20.04 | CUDA 11.1 | tensorflow 2.4.0 | keras 2.4.0`
I'm getting a different output in
x = KL.TimeDistributed(KL.Conv2D(fc_layers_size, (pool_size, pool_size), padding='valid'), name='mrcnn_class_conv1')(x)
namely, in the first setup:
x = Tensor("mrcnn_class_conv1/Reshape_1:0", shape=(None, 1000, 1, 1, 1024), dtype=float32)
and in the second setup:
x = KerasTensor(type_spec=TensorSpec(shape=(1, None, 1, 1, 1024), dtype=tf.float32, name=None), name='conv2d/squeeze_batch_dims/Reshape_1:0', description="created by layer 'conv2d'")
tensor shapes are different somehow: `(1, None, 1, 1, 1024)` instead of `(None, 1000, 1, 1, 1024)`.
Does anybody have an idea about reasons and possible fixes? Does this migration make sense?
Thanks in advance for any advice! | open | 2022-01-19T13:29:37Z | 2022-01-19T13:29:37Z | https://github.com/matterport/Mask_RCNN/issues/2762 | [] | aabramovrepo | 0 |
quantumlib/Cirq | api | 6,538 | Inverting gates does not allow to save circuit in json | **Description of the issue**
**How to reproduce the issue**
```
U = cirq.Circuit()
K = cirq.Circuit()
K.append(cirq.PhasedFSimGate(0, 0, 0, 0,0).on(cirq.GridQubit(0, 0), cirq.GridQubit(0, 1)))
U.append(K)
U.append(cirq.X.on(cirq.GridQubit(0, 0)))
U.append(cirq.inverse(K))
with open('test.json', 'wb') as f:
f.write(cirq.to_json(U).encode('utf-8'))
f.close()
U = cirq.read_json(open('test.json', 'rb'))
```
---> the following code will fail with:
```
ValueError: Could not resolve type '_InverseCompositeGate' during deserialization
```
**Cirq version**
```
1.2.0
```
| open | 2024-04-01T03:17:06Z | 2024-11-13T01:16:17Z | https://github.com/quantumlib/Cirq/issues/6538 | [
"kind/bug-report",
"triage/accepted",
"area/serialization",
"area/json"
] | nikita-astronaut | 3 |
Yorko/mlcourse.ai | numpy | 758 | Proofread topic 7 | - Fix issues
- Fix typos
- Correct the translation where needed
- Add images where necessary | closed | 2023-10-24T07:41:55Z | 2024-08-25T08:10:28Z | https://github.com/Yorko/mlcourse.ai/issues/758 | [
"enhancement",
"articles"
] | Yorko | 2 |
onnx/onnx | pytorch | 5,920 | Clarify DFT behavior when `inverse=True` | Currently, https://onnx.ai/onnx/operators/onnx__DFT.html specifies the input/output relations for DFT, but it does not specify what those when `inverse=True`. This can create confusion on, for example, whether both `onesided` and `inverse` can be set, and what the input/output shapes should be.
cc @xadupre @gramalingam | open | 2024-02-08T17:56:53Z | 2025-01-24T15:02:17Z | https://github.com/onnx/onnx/issues/5920 | [
"topic: spec clarification"
] | justinchuby | 3 |
geex-arts/django-jet | django | 233 | Custom Dashboard shows empty Modules | Hi,
if you setup a custom dashboard with some modules, e. g. AppList, ModelList, LinkedList, ..., the module instantiating with the argument `models` works only with the asterisk like this: `models=('appname.*',)`. In case of explicit given model names like `models=('appname.the_model',)`, the dashboard module remains empty. No models appear in the module.
**The code snippet:**
```
self.children.append(
modules.AppList(
title=_('User Management'),
column=1,
models=('auth.user', 'auth.group',),
),
)
```
**The result on the dashboard:**
<img width="362" alt="applist" src="https://user-images.githubusercontent.com/2959990/28228448-e54557fc-68de-11e7-99bb-880d228b5fba.png">
| open | 2017-07-14T19:58:54Z | 2017-08-15T15:42:01Z | https://github.com/geex-arts/django-jet/issues/233 | [] | ghost | 7 |
pyppeteer/pyppeteer | automation | 137 | Evaluate stuck after some executions in headless mode | if I call this code 100-200 times it just stuck and still awaiting forever:
`
await self.page.evaluate('''() => {
console.log('LOG');
return 'test string';
}''')
`
Any Ideas? This goes only with headless: True option on all chrome and chromium builds under macOS and Linux, but on NodeJS puppeteer same code works perfect. | closed | 2020-06-16T06:49:22Z | 2021-06-11T10:44:35Z | https://github.com/pyppeteer/pyppeteer/issues/137 | [
"bug",
"fixed-in-2.1.1",
"can't reproduce"
] | alternativshik | 37 |
google-research/bert | tensorflow | 1,139 | Is this BERT architecture will be suitable for log file anomly detection?? | I'm doing log files analysis where I have different server logs and I want to detect anomaly from these logs. Already it is implemented in LSTM but this implementation is not as much nice. I have done research a little bit on BERT it looks great but I just want to know more detail about this task. | open | 2020-08-19T06:52:09Z | 2020-08-19T06:53:33Z | https://github.com/google-research/bert/issues/1139 | [] | zawster | 0 |
huggingface/transformers | tensorflow | 36,472 | Dtensor support requires torch>=2.5.1 | ### System Info
torch==2.4.1
transformers@main
### Who can help?
#36335 introduced an import on Dtensor https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_utils.py#L44
but this doesn't exist on torch==2.4.1, but their is no guard around this import and setup.py lists torch>=2.0.
@ArthurZucker
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
install torch==2.4.1
install transformers@main
attempt to load any prretained model
see axolotl ci https://github.com/axolotl-ai-cloud/axolotl/actions/runs/13578637245/job/37960393969
### Expected behavior
regular functionality so import from transformers doesn't fail | closed | 2025-02-28T05:02:22Z | 2025-03-05T10:27:02Z | https://github.com/huggingface/transformers/issues/36472 | [
"bug"
] | winglian | 6 |
wemake-services/wemake-django-template | pytest | 2,160 | Upgrade to 4.2 | Django 4.2 is out: https://github.com/wemake-services/wemake-django-template/pull/2157
We need to upgrade our template, when `django-stubs` will be ready. | closed | 2023-04-04T07:04:27Z | 2023-05-03T08:49:57Z | https://github.com/wemake-services/wemake-django-template/issues/2160 | [] | sobolevn | 0 |
cobrateam/splinter | automation | 1,072 | Error in Docker Container, "unknown error: DevToolsActivePort file doesn't exist" | I'm trying to run a program in a Docker container using Splinter. Here is my Dockerfile
```
FROM python:3.7-alpine
COPY requirements.txt .
RUN apk update && \
apk add make automake gcc g++ subversion python3-dev && \
apk add gcc musl-dev python3-dev libffi-dev openssl-dev && \
apk add chromium chromium-chromedriver && \
pip install -r /requirements.txt && \
rm -rf /root/.[acpw]* ipaexg00301*
COPY . /app
```
After creating the container and running my code in it I get the following error:
Traceback (most recent call last):
File "/app/youversion_ingress.py", line 387, in <module>
main()
File "/app/youversion_ingress.py", line 375, in main
books_df, grouped_df = books_chapter_count(exploded, max_chapters)
File "/app/youversion_ingress.py", line 224, in books_chapter_count
browser = Browser('chrome')
File "/usr/local/lib/python3.6/site-packages/splinter/browser.py", line 101, in Browser
return get_driver(driver, retry_count=retry_count, *args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/splinter/browser.py", line 76, in get_driver
raise err
File "/usr/local/lib/python3.6/site-packages/splinter/browser.py", line 72, in get_driver
return driver(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/splinter/driver/webdriver/chrome.py", line 43, in __init__
self.driver = Chrome(options=options, **kwargs)
File "/usr/local/lib/python3.6/site-packages/selenium/webdriver/chrome/webdriver.py", line 81, in __init__
desired_capabilities=desired_capabilities)
File "/usr/local/lib/python3.6/site-packages/selenium/webdriver/remote/webdriver.py", line 157, in __init__
self.start_session(capabilities, browser_profile)
File "/usr/local/lib/python3.6/site-packages/selenium/webdriver/remote/webdriver.py", line 252, in start_session
response = self.execute(Command.NEW_SESSION, parameters)
File "/usr/local/lib/python3.6/site-packages/selenium/webdriver/remote/webdriver.py", line 321, in execute
self.error_handler.check_response(response)
File "/usr/local/lib/python3.6/site-packages/selenium/webdriver/remote/errorhandler.py", line 242, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.WebDriverException: Message: unknown error: Chrome failed to start: exited abnormally.
(unknown error: DevToolsActivePort file doesn't exist)
(The process started from chrome location /usr/lib/chromium/chrome is no longer running, so ChromeDriver is assuming that Chrome has crashed.)
Running my script locally on my machine works fine so it must be something within Docker container. Input and comments would be appreciated. | closed | 2022-07-22T13:04:18Z | 2024-02-13T15:04:11Z | https://github.com/cobrateam/splinter/issues/1072 | [] | oliverdixon85 | 2 |
iperov/DeepFaceLab | deep-learning | 5,679 | Who knows what kind of artifacts climbed out? | Who knows what kind of artifacts climbed out?


| open | 2023-06-06T09:37:55Z | 2023-06-08T14:30:55Z | https://github.com/iperov/DeepFaceLab/issues/5679 | [] | pisket | 1 |
ultrafunkamsterdam/undetected-chromedriver | automation | 937 | Can't use user profiles | Hi all, I am trying to run a user profile that has a custom extension installed. However, when I run the WebDriver, it does not open with that profile and the extension is nowhere to be found. The entire of purpose of this is to use authenticated proxies, as every other method that I have tried (including loading an unpacked extension) doesn't work either as the extension never loads. I have made sure to close this profile before trying it.
My code is:
```
import undetected_chromedriver.v2 as uc
chrome_options = uc.ChromeOptions()
chrome_options.add_argument('--user-data-dir=C:\\Users\\Myname\\AppData\\Local\\Google\\Chrome\\User Data\\')
chrome_options.add_argument('--profile-directory=Profile 6')
driver = uc.Chrome(chrome_options=chrome_options)
driver.get('https://httpbin.org/ip')
```
However my profile never loads. I'm not really sure what else I can do, and I also tried concatenating on Profile 6 to the user data dir string and throwing away the --profile-directory tag to no avail. Any help or additional suggestions would be appreciated. I am able to get this to work in the default selenium webdriver, but not with the undetected one. | open | 2022-12-06T01:12:34Z | 2022-12-09T02:00:11Z | https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/937 | [] | joshglens | 4 |
hpcaitech/ColossalAI | deep-learning | 5,600 | [BUG] [Shardformer]: Error in blip2 testing with half precision | ### 🐛 Describe the bug
1. It seems blip2 testing doesn't work correctly at all if model is half precision (torch.float16).
2. With bfloat16, `colossalai.shardformer.layer.FusedLayerNorm` doesn't seem to work correctly.
https://github.com/hpcaitech/ColossalAI/blob/main/tests/test_shardformer/test_model/test_shard_blip2.py
This test file passes as it is.
But if I change `dtype` to `torch.float16`:
https://github.com/hpcaitech/ColossalAI/blob/89049b0d899477a3b31f02b31fde1a839e31c6fc/tests/test_shardformer/test_model/test_shard_blip2.py#L92
It fails:
```bash
E File "test_shard_blip2.py", line 28, in check_forward_backward
E assert_hf_output_close(org_output, shard_output, ignore_keys=["past_key_values"])
E File "colossalai/testing/comparison.py", line 125, in assert_hf_output_close
E assert_hf_output_close(
E File "colossalai/testing/comparison.py", line 149, in assert_hf_output_close
E assert_close(
E File "/opt/conda/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1520, in assert_close
E raise error_metas[0].to_error(msg)
E AssertionError: Tensor-likes are not close!
E
E Mismatched elements: 5947392 / 5947392 (100.0%)
E Greatest absolute difference: nan at index (0, 0) (up to 1e-06 allowed)
E Greatest relative difference: nan at index (0, 0) (up to 1e-05 allowed)
```
With `dtype=torch.bfloat16` and without `enable_fused_normalization` it passes, but if I enable `enable_fused_normalization`, it fails again:
```bash
E File "test_shard_blip2.py", line 28, in check_forward_backward
E assert_hf_output_close(org_output, shard_output, ignore_keys=["past_key_values"])
E File "/colossalai/testing/comparison.py", line 125, in assert_hf_output_close
E assert_hf_output_close(
E File "/colossalai/testing/comparison.py", line 125, in assert_hf_output_close
E assert_hf_output_close(
E File "/colossalai/testing/comparison.py", line 149, in assert_hf_output_close
E assert_close(
E File "/opt/conda/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1520, in assert_close
E raise error_metas[0].to_error(msg)
E AssertionError: Tensor-likes are not close!
E
E Mismatched elements: 24271 / 2161696 (1.1%)
E Greatest absolute difference: 0.0078125 at index (0, 3, 47) (up to 1e-05 allowed)
E Greatest relative difference: 169.0 at index (0, 3, 47325) (up to 1e-05 allowed)
```
### Environment
torch 2.2.1 / CUDA 12.1
colossalai 0.3.6
transformesr 4.36.0 | open | 2024-04-15T20:59:12Z | 2024-04-24T08:57:39Z | https://github.com/hpcaitech/ColossalAI/issues/5600 | [
"bug"
] | insujang | 1 |
ultralytics/yolov5 | machine-learning | 13,164 | multi-gpu validation | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
Hello,
Is there any way to use multi-gpu during running val.py of Yolov5?
I set --device parameter as 0,1,2,3, but it doesn't work
### Additional
_No response_ | closed | 2024-07-04T05:17:12Z | 2024-08-15T00:22:57Z | https://github.com/ultralytics/yolov5/issues/13164 | [
"question",
"Stale"
] | yjseok | 2 |
Miserlou/Zappa | django | 1,871 | bug: Error condition check for binary support | https://github.com/Miserlou/Zappa/blob/3ccf7490a8d8b8fa74a61ee39bf44234f3567739/zappa/handler.py#L542-L551
The condition `not response.mimetype.startswith("text/") or response.mimetype != "application/json"` will never happen. | open | 2019-05-11T17:45:05Z | 2019-05-11T17:45:05Z | https://github.com/Miserlou/Zappa/issues/1871 | [] | lonsdale8734 | 0 |
jupyter/nbgrader | jupyter | 1,340 | Help with the error, The exchange directory at /srv/nbgrader/exchange does not exist and could not be created. The "release" and "collect" functionality will not be available. | The documentation is somewhat vague as to how to fix this error. Could someone please provide a more direct, step-by-step approach to solving this error. Some of us would like to bring Jupyter notebooks into our curriculum but are very new to it. | open | 2020-06-10T22:13:53Z | 2020-07-29T16:25:41Z | https://github.com/jupyter/nbgrader/issues/1340 | [] | stcline | 4 |
marcomusy/vedo | numpy | 610 | Curvature range | I am playing around with the vedo library a bit and find it super helpful to work with surfaces. In particular, I turn point clouds into surfaces with `recoSurface()` and then measure curvatures with the `addcurvatureScalar()` function and the Gaussian curvature method.
The visualized surface looks like this:

According to the definition of the curvature method, the algorithm takes into account the edges connecting an individual vertex with its neighbours. I was wondering if it was possible to increase the range within which points will be considered as neighbours and thus calculate the curvature in a bit more global fashion? Because for a sufficiently large amount of points, any surface will appear locally flat (as in the screenshot), but I'm more interested in the larger-scale curvatures.
Do you have a suggestion on how to do this? Any tips are greatly appreciated :)
| closed | 2022-03-04T14:04:29Z | 2022-03-07T23:40:03Z | https://github.com/marcomusy/vedo/issues/610 | [] | jo-mueller | 8 |
man-group/arctic | pandas | 407 | Randomly raise Exception: Error decompressing if append many times to a symbol in chunk store | #### Arctic Version
```
arctic (1.51.0)
```
#### Arctic Store
```
ChunkStore
```
#### Platform and version
Red Hat Enterprise Linux Server release 7.2 (Maipo)
#### Description of problem and/or code sample that reproduces the issue
I append daily data to one symbol (write if not exists and set chunk_size = 'A').
The data looks like this:
- columns of the DataFrame
```
Index(['beta', 'btop', 'earnyild', 'growth', 'industry', 'leverage',
'liquidty', 'momentum', 'resvol', 'sid', 'size', 'sizenl'],
dtype='object')
```
- head of the DataFrame (part)
```
beta btop earnyild growth industry leverage liquidty
date
2008-12-25 0.200 -0.386 -0.669 -0.432 23 -0.307 0.746
2008-12-25 0.653 0.048 0.671 0.182 10 0.255 1.097
2008-12-25 -1.726 -1.105 -1.042 -2.661 22 -0.732 -3.400
2008-12-25 -0.407 2.840 2.588 -1.505 19 -0.454 -1.137
2008-12-25 0.931 1.302 -0.946 -0.306 31 3.042 -0.429
```
- the dtypes
```
beta float64
btop float64
earnyild float64
growth float64
industry int64
leverage float64
liquidty float64
momentum float64
resvol float64
sid int64
size float64
sizenl float64
```
it will randomly raise the following exception (2008-12-25 for example)
```
[2017-08-29 14:17:00] [factor.value] [INFO] update 2008-12-25 barra exposures failed:Error decompressing
[2017-08-29 14:17:00] [factor.value] [ERROR] Traceback (most recent call last):
File "/home/quant/newalpha/warden/warden/_update_factors.py", line 88, in _update_barra_exposures
n = update_lib(lib_factors, 'barra_exposures', exposures)
File "/home/quant/newalpha/warden/warden/utils.py", line 70, in update_lib
lib.append(symbol, data_to_append, metadata=meta)
File "/opt/anaconda3/lib/python3.5/site-packages/arctic/chunkstore/chunkstore.py", line 503, in append
self.__update(sym, item, metadata=metadata, combine_method=SER_MAP[sym[SERIALIZER]].combine, audit=audit)
File "/opt/anaconda3/lib/python3.5/site-packages/arctic/chunkstore/chunkstore.py", line 415, in __update
df = self.read(symbol, chunk_range=chunker.to_range(start, end), filter_data=False)
File "/opt/anaconda3/lib/python3.5/site-packages/arctic/chunkstore/chunkstore.py", line 268, in read
data = SER_MAP[sym[SERIALIZER]].deserialize(chunks, **kwargs)
File "/opt/anaconda3/lib/python3.5/site-packages/arctic/serialization/numpy_arrays.py", line 195, in deserialize
df = pd.concat([self.converter.objify(d, columns) for d in data], ignore_index=not index)
File "/opt/anaconda3/lib/python3.5/site-packages/arctic/serialization/numpy_arrays.py", line 195, in <listcomp>
df = pd.concat([self.converter.objify(d, columns) for d in data], ignore_index=not index)
File "/opt/anaconda3/lib/python3.5/site-packages/arctic/serialization/numpy_arrays.py", line 126, in objify
d = decompress(doc[DATA][doc[METADATA][LENGTHS][col][0]: doc[METADATA][LENGTHS][col][1] + 1])
File "/opt/anaconda3/lib/python3.5/site-packages/arctic/_compression.py", line 55, in decompress
return clz4.decompress(_str)
File "_compress.pyx", line 121, in _compress.decompress (src/_compress.c:2151)
Exception: Error decompressing
```
It seems that the data was broken and can not be decompressed (for any date range). If I delete the document related to 2008, it can be decompressed again.
Thx a lot!
| closed | 2017-08-29T07:14:20Z | 2017-10-17T20:23:35Z | https://github.com/man-group/arctic/issues/407 | [] | lf-shaw | 14 |
InstaPy/InstaPy | automation | 6,526 | Can I specify a few types of media? | <!-- Did you know that we have a Discord channel ? Join us: https://discord.gg/FDETsht -->
<!-- Is this a Feature Request ? Please, check out our Wiki first https://github.com/timgrossmann/InstaPy/wiki -->
## Expected Behavior
When i want like some posts of people from list, sometimes I meet situation where most part of content is video and when i write like this:
`session.set_do_like(enabled=True, percentage=100)`
`session.set_user_interact(amount=2, percentage=100, randomize=False, media='Photo')`
`session.follow_by_list(followlist=list_to_follow, times=2, sleep_delay=10, interact=True)`
long time Instapy try to find photos
## Can I
Can I write like this and Instapy will stop on a first intersection with specified type of media:
`session.set_do_like(enabled=True, percentage=100)`
`session.set_user_interact(amount=2, percentage=100, randomize=False, media='Photo, Video')`
`session.follow_by_list(followlist=list_to_follow, times=2, sleep_delay=10, interact=True)`
or I should change randomize to True or you can give me another way. Thanks
| open | 2022-03-01T00:26:24Z | 2022-03-01T00:27:28Z | https://github.com/InstaPy/InstaPy/issues/6526 | [] | Ilnur786 | 0 |
junyanz/pytorch-CycleGAN-and-pix2pix | deep-learning | 1,373 | How to get result images have same names suffix as testA? | Hi,
Firstly thank you very much for such a nice implementation. My question is related to CycleGan:
1. I have 500 files in /testA/ named such as 1.jpg, 2.jpg, ... , 500.jpg
2. I want to have the results to have outputs say 1_real.png , 1_fake.png, ..., 500_real.png, 500_fake.png - that refers to the original 1.jpg, .... , 500.jpg (in sequence).
Right now it comes as randomized even when number of test images is changed to 500 instead of the default.
Any tips on how to achieve this? I think it should be a simple change in some lines.
Thanks.
PS. As an aside (though not important), can the jpg format preserved and not changed to png? | open | 2022-01-30T21:35:15Z | 2022-02-15T20:23:13Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1373 | [] | adnan0819 | 3 |
tqdm/tqdm | jupyter | 678 | Colors do not work if tqdm imported. | This code works...
```
_CRED = '\033[91m'
_CEND = '\033[0m'
print(_CRED + "Colors bre being set" + _CEND + '-all_set')
```
but after importing the same code does not work.
```
import tqdm
_CRED = '\033[91m'
_CEND = '\033[0m'
print(_CRED + "Colors bre being set" + _CEND + '-all_set')
```
Python:
(base) C:\Users\212574384>python --version
Python 3.7.0
OS: win 10
 | closed | 2019-02-19T21:45:11Z | 2020-10-10T23:51:15Z | https://github.com/tqdm/tqdm/issues/678 | [
"invalid ⛔",
"to-fix ⌛",
"p2-bug-warning ⚠"
] | krishnachouhan | 13 |
sinaptik-ai/pandas-ai | data-visualization | 1,168 | Support Chinese characters in prompt generation stage | ### System Info
pandasai == 2.0.43
python == 3.11
### 🐛 Describe the bug
I was trying to use *Field Descriptions* feature to improve the understanding of my dataset to LLMs. The way I am doing is write a data description function to create a dictionary info about dataset then pass then to pandasai through *Field Descriptions* like this:
```
data = preview_data(df)
# define a connector
connector = PandasConnector({"original_df": df}, name='My Connector', field_descriptions=data)
```
My part of `data` looks like this:
```
{'时间': 'The 时间 column contains string values. The unique values are: 2023-6-14, 2022-4-22, 2022-11-5.'}
```
As you can see there is some Chinese characters, but in the prompt_generation stage, the Chinese characters was not decoded thus it looks like this:
```
dfs[0]:
name: My Connector
description: null
type: pd.DataFrame
rows: 28
columns: 18
schema:
fields:
- name: "\u65F6\u95F4"
type: object
samples:
- 2022-4-22
- 2022-11-5
- 2023-6-14
```
Which makes LLM much more confused "\u65F6\u95F4".
Is any way we solve this problem? Any suggestion will be grateful! | closed | 2024-05-20T02:58:26Z | 2024-10-16T08:32:27Z | https://github.com/sinaptik-ai/pandas-ai/issues/1168 | [
"bug"
] | Tu-Zhenzhao | 1 |
slackapi/python-slack-sdk | asyncio | 1,611 | Add `expand` attribute to SectionBlock | The [API docs list the attribute `expand`](https://api.slack.com/reference/block-kit/blocks#section), which is currently missing from the SDK version in SectionBlock class.
### Category (place an `x` in each of the `[ ]`)
- [ ] **slack_sdk.web.WebClient (sync/async)** (Web API client)
- [ ] **slack_sdk.webhook.WebhookClient (sync/async)** (Incoming Webhook, response_url sender)
- [X] **slack_sdk.models** (UI component builders)
- [ ] **slack_sdk.oauth** (OAuth Flow Utilities)
- [ ] **slack_sdk.socket_mode** (Socket Mode client)
- [ ] **slack_sdk.audit_logs** (Audit Logs API client)
- [ ] **slack_sdk.scim** (SCIM API client)
- [ ] **slack_sdk.rtm** (RTM client)
- [ ] **slack_sdk.signature** (Request Signature Verifier)
### Requirements
Please read the [Contributing guidelines](https://github.com/slackapi/python-slack-sdk/blob/main/.github/contributing.md) and [Code of Conduct](https://slackhq.github.io/code-of-conduct) before creating this issue or pull request. By submitting, you are agreeing to those rules.
| closed | 2024-12-06T07:33:34Z | 2025-01-14T14:53:39Z | https://github.com/slackapi/python-slack-sdk/issues/1611 | [
"enhancement",
"good first issue"
] | henrinormak | 2 |
mwaskom/seaborn | data-science | 3,227 | seaborn objects scale with two visualisations with same kwargs? | Hello,
I ran into a problem with scale, when I'm trying to display two visualizations with color mapped by column.
I'm trying to create bar plot with labels on bars. Position of labels and color of labels depends on column of dataframe. Also, I would like to color bars by column.
here is my question on stack overflow: https://stackoverflow.com/questions/75161245/how-to-use-seaborn-objects-scale-with-two-visualisations-with-same-kwargs
Is there a way to do this?
Thank you for answer | closed | 2023-01-19T11:30:20Z | 2023-01-20T00:30:28Z | https://github.com/mwaskom/seaborn/issues/3227 | [] | vorel99 | 1 |
cvat-ai/cvat | tensorflow | 8,627 | GT annotations can sometimes show up in the Standard mode | ### Actions before raising this issue
- [X] I searched the existing issues and did not find anything similar.
- [X] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Steps to Reproduce
In a task with honeypots, GT annotations can show up in the UI, when an annotation job is just opened. It can happen on the first opening of the job or later. If a mode is switched, e.g. to Review, the GT annotations correctly disappear, until the show conflicts button is pressed.

### Expected Behavior
_No response_
### Possible Solution
_No response_
### Context
_No response_
### Environment
- bce96eaafd0dc1dab6d59044aba6e15f7ca3163e
| closed | 2024-10-31T15:03:55Z | 2024-11-13T14:20:47Z | https://github.com/cvat-ai/cvat/issues/8627 | [
"bug",
"ui/ux"
] | zhiltsov-max | 0 |
jmcarpenter2/swifter | pandas | 132 | TypeError Thrown for Geospatial Calculations | When calling
`df[geom_col].swifter.apply(lambda geom: geom.minimum_rotated_rectangle)`
A TypeError is thrown. `TypeError: apply() takes from 2 to 3 positional arguments but 4 were given`
If its something simpler, like getting the area, no error is thrown
`df[geom_col].swifter.apply(lambda geom: geom.area)`
Swifter version 0.305, Pandas version 1.1.3, dask version 2.22, geopandas version 0.8.1
Full error:
`---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-15-72fa7e94ac35> in <module>
----> 1 df[geom_col].swifter.apply(lambda geom: geom.minimum_rotated_rectangle)
C:\ProgramData\Anaconda3\lib\site-packages\swifter\swifter.py in apply(self, func, convert_dtype, args, **kwds)
263 # if pandas sample apply takes too long and not performing str processing, use dask
264 if (est_apply_duration > self._dask_threshold) and allow_dask_processing:
--> 265 return self._dask_apply(func, convert_dtype, *args, **kwds)
266 else: # use pandas
267 if self._progress_bar:
C:\ProgramData\Anaconda3\lib\site-packages\swifter\swifter.py in _dask_apply(self, func, convert_dtype, *args, **kwds)
226 dd.from_pandas(self._obj, npartitions=self._npartitions)
227 .apply(lambda x: func(x, *args, **kwds), convert_dtype=convert_dtype, meta=meta)
--> 228 .compute(scheduler=self._scheduler)
229 )
230
C:\ProgramData\Anaconda3\lib\site-packages\dask\base.py in compute(self, **kwargs)
165 dask.base.compute
166 """
--> 167 (result,) = compute(self, traverse=False, **kwargs)
168 return result
169
C:\ProgramData\Anaconda3\lib\site-packages\dask\base.py in compute(*args, **kwargs)
445 postcomputes.append(x.__dask_postcompute__())
446
--> 447 results = schedule(dsk, keys, **kwargs)
448 return repack([f(r, *a) for r, (f, a) in zip(results, postcomputes)])
449
C:\ProgramData\Anaconda3\lib\site-packages\dask\multiprocessing.py in get(dsk, keys, num_workers, func_loads, func_dumps, optimize_graph, pool, **kwargs)
216 pack_exception=pack_exception,
217 raise_exception=reraise,
--> 218 **kwargs
219 )
220 finally:
C:\ProgramData\Anaconda3\lib\site-packages\dask\local.py in get_async(apply_async, num_workers, dsk, result, cache, get_id, rerun_exceptions_locally, pack_exception, raise_exception, callbacks, dumps, loads, **kwargs)
484 _execute_task(task, data) # Re-execute locally
485 else:
--> 486 raise_exception(exc, tb)
487 res, worker_id = loads(res_info)
488 state["cache"][key] = res
C:\ProgramData\Anaconda3\lib\site-packages\dask\local.py in reraise(exc, tb)
314 if exc.__traceback__ is not tb:
315 raise exc.with_traceback(tb)
--> 316 raise exc
317
318
C:\ProgramData\Anaconda3\lib\site-packages\dask\local.py in execute_task()
220 try:
221 task, data = loads(task_info)
--> 222 result = _execute_task(task, data)
223 id = get_id()
224 result = dumps((result, id))
C:\ProgramData\Anaconda3\lib\site-packages\dask\core.py in _execute_task()
119 # temporaries by their reference count and can execute certain
120 # operations in-place.
--> 121 return func(*(_execute_task(a, cache) for a in args))
122 elif not ishashable(arg):
123 return arg
C:\ProgramData\Anaconda3\lib\site-packages\dask\optimization.py in __call__()
1020 if not len(args) == len(self.inkeys):
1021 raise ValueError("Expected %d args, got %d" % (len(self.inkeys), len(args)))
-> 1022 return core.get(self.dsk, self.outkey, dict(zip(self.inkeys, args)))
1023
1024 def __reduce__(self):
C:\ProgramData\Anaconda3\lib\site-packages\dask\core.py in get()
149 for key in toposort(dsk):
150 task = dsk[key]
--> 151 result = _execute_task(task, cache)
152 cache[key] = result
153 result = _execute_task(out, cache)
C:\ProgramData\Anaconda3\lib\site-packages\dask\core.py in _execute_task()
119 # temporaries by their reference count and can execute certain
120 # operations in-place.
--> 121 return func(*(_execute_task(a, cache) for a in args))
122 elif not ishashable(arg):
123 return arg
C:\ProgramData\Anaconda3\lib\site-packages\dask\utils.py in apply()
29 def apply(func, args, kwargs=None):
30 if kwargs:
---> 31 return func(*args, **kwargs)
32 else:
33 return func(*args)
C:\ProgramData\Anaconda3\lib\site-packages\dask\dataframe\core.py in apply_and_enforce()
5256 func = kwargs.pop("_func")
5257 meta = kwargs.pop("_meta")
-> 5258 df = func(*args, **kwargs)
5259 if is_dataframe_like(df) or is_series_like(df) or is_index_like(df):
5260 if not len(df):
C:\ProgramData\Anaconda3\lib\site-packages\dask\utils.py in __call__()
893
894 def __call__(self, obj, *args, **kwargs):
--> 895 return getattr(obj, self.method)(*args, **kwargs)
896
897 def __reduce__(self):
TypeError: apply() takes from 2 to 3 positional arguments but 4 were given` | closed | 2020-10-08T19:57:53Z | 2020-10-12T03:32:50Z | https://github.com/jmcarpenter2/swifter/issues/132 | [] | yesthisistom | 4 |
lucidrains/vit-pytorch | computer-vision | 292 | Question regarding 1d fft use | Hi, thanks so much for your excellent work! I just have one question regarding your last commit of simple_vit_with_fft:
https://github.com/lucidrains/vit-pytorch/blob/d446a41243c91a43adcac6f0559d53f1a4eea4fa/vit_pytorch/simple_vit_with_fft.py#L131
In this commit you use a 1D fft; may I ask why not use torch.fft.fft2()? Since you are applying fft to img with size of (B,C,H,W) not (B,N,C). Thanks! | closed | 2023-12-23T07:47:55Z | 2023-12-23T16:12:14Z | https://github.com/lucidrains/vit-pytorch/issues/292 | [] | chengengliu | 1 |
thtrieu/darkflow | tensorflow | 969 | Conversion to ONNX (Yolo on HoloLens) | Hey there,
I'm currently trying to convert my saved (and working) .pb file into a onnx file. I'm using winML tools at the moment but the result of the conversion doesn't work at all (the input parameters are wrong + the whole architecture is not correct). I want to use the converted model inside a UWP application that's running on a HoloLens.
I put the following properties as input/output:
Input Vector: input:0
Output Vector: output:0
Has anyone already successfully converted this model (or any TensorFlow model) to ONNX? If so, I'd be very thankful for any link/tutorial/help!!
Please let me know if you need any more Information. I'm very unexperienced with such forums. Sorry about that.
Walter | open | 2019-01-16T09:48:20Z | 2019-01-16T10:01:37Z | https://github.com/thtrieu/darkflow/issues/969 | [] | walterPeyton | 0 |
httpie/cli | python | 637 | Is there a way to catch response body of a Internal Server Error? | Hello, i need to get the response body of a Http Status Code 500 request, but i need that request itself to be an "exit 0" request. Is there a way to do this with httpie?
Example with curl:
```bash
curl --silent --show-error --fail -X GET --header "Accept: application/json" --header "token:mytoken" "https://myrequest.com" -d """"""
```
This request returns exit 0 without writing a bash script to parse the status code, but the downside is that i can't get the response body with this aproach.
Is there a way to do what i want with httpie **without** creating a script to parse the status codes?
Thanks in advance | closed | 2017-12-04T16:41:48Z | 2017-12-28T16:03:30Z | https://github.com/httpie/cli/issues/637 | [] | thiagobucca | 1 |
airtai/faststream | asyncio | 1,350 | Create docs according to design (image) | closed | 2024-04-08T06:03:49Z | 2024-04-15T06:06:12Z | https://github.com/airtai/faststream/issues/1350 | [
"documentation"
] | davorrunje | 0 | |
randyzwitch/streamlit-folium | streamlit | 82 | How to get lat/lon from mouse click position? | How can I get the GPS (lat/lon) point for the last clicked mouse position, and have this info displayed in the console continuously?
I have tried with the following code.
<details>
<summary>Click to see code</summary>
```python
import time
import streamlit as st
from streamlit_folium import st_folium
import folium
def draw_folium_map():
center = [53.4844, -2.2952]
tiles = ["cartodbpositron", "Stamen Toner", "OpenStreetMap"]
map = folium.Map(
location = [center[0], center[1]],
zoom_start = 15,
zoom_control = True,
scrollWheelZoom = False,
tiles = tiles[0],
)
folium.Marker(
location=[53.4844, -2.2952],
popup=f"A location!",
icon=folium.Icon(color="blue", icon="star"),
).add_to(map)
return map
#----------------------------------------------------------------------
# Main
#----------------------------------------------------------------------
m = draw_folium_map()
output = st_folium(m, key="map", width=650, height=600)
if not output['last_clicked'] == None:
print('map: ({}, {})'.format(output['last_clicked']['lat'], output['last_clicked']['lng']) )
st.write(output)
#xy = [ map['last_clicked']['lat'], map['last_clicked']['lng'] ]
#print('Lat: {}, Lon : {}'.format(xy[0], xy[1]))
while True:
try:
#if not map['last_clicked'] == None:
# print('map: ', map['last_clicked'])
time.sleep(2)
except KeyboardInterrupt:
print('\n [INFO] Program Interrupted by user.')
st.stop()
exit(1)
```
</details>
| closed | 2022-08-25T00:05:28Z | 2022-08-31T09:50:24Z | https://github.com/randyzwitch/streamlit-folium/issues/82 | [] | eabase | 2 |
huggingface/diffusers | pytorch | 10,194 | Full support for Flux attention masking | **Is your feature request related to a problem? Please describe.**
SimpleTuner/Kohya allow T5 attention masked training, however this is not currently supported natively in diffusers
**Describe the solution you'd like.**
Already implemented and used in Simpletuner and Kohya: https://github.com/bghira/SimpleTuner/blob/main/helpers/models/flux/transformer.py
**Describe alternatives you've considered.**
Recent implementation doesn't really solve the use case of using existing fine tunes with attention masking with diffusers
https://github.com/huggingface/diffusers/pull/10122
**Additional context.**
@yiyixuxu @bghira @AmericanPresidentJimmyCarter
@bghira's suggestion: "i'd suggested they add encoder_attention_mask and image_attention_mask and if image_attention_mask is None that they could then 1-fill those positions and just cat them together"
| open | 2024-12-11T19:07:38Z | 2025-01-11T15:02:41Z | https://github.com/huggingface/diffusers/issues/10194 | [
"stale"
] | squewel | 2 |
graphistry/pygraphistry | jupyter | 85 | About point_color | Is the point color we can use no more than 12 ?
What if my network has more over 12 modules, The rest modules would all be painting in black, so we can not distinguish those modules?
The question is how can I override the default edge and node coloring ?
I have read this tutorial, but still don't understand? Anyone can help me with it.
http://graphistry.github.io/pygraphistry/html/Tutorial%20Part%201%20(Honey%20Pot).html | closed | 2017-10-11T03:15:06Z | 2017-11-04T13:21:40Z | https://github.com/graphistry/pygraphistry/issues/85 | [] | RiptideBo | 1 |
ultralytics/yolov5 | pytorch | 12,925 | Multi-GPU train | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
First of all, thank you for your always kind and detailed answers!
I'm trying to train the yolov6_seg model with multi-gpu and I'm getting an error, I don't know which part I need to fix.
> train code
```
!python -m torch.distributed.launch --nproc_per_node 2 tools/train.py --batch 64 --conf configs/yolov6s_seg.py --epoch 150 --data ../FST1/data.yaml --device 0,1
```
> error
```
/home/dilab03/anaconda3/lib/python3.11/site-packages/torch/distributed/launch.py:183: FutureWarning: The module torch.distributed.launch is deprecated
and will be removed in future. Use torchrun.
Note that --use-env is set by default in torchrun.
If your script expects `--local-rank` argument to be set, please
change it to read from `os.environ['LOCAL_RANK']` instead. See
https://pytorch.org/docs/stable/distributed.html#launch-utility for
further instructions
warnings.warn(
[2024-04-15 19:19:43,314] torch.distributed.run: [WARNING]
[2024-04-15 19:19:43,314] torch.distributed.run: [WARNING] *****************************************
[2024-04-15 19:19:43,314] torch.distributed.run: [WARNING] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
[2024-04-15 19:19:43,314] torch.distributed.run: [WARNING] *****************************************
Traceback (most recent call last):
Traceback (most recent call last):
File "/media/HDD/조홍석/YOLOv6/tools/train.py", line 143, in <module>
File "/media/HDD/조홍석/YOLOv6/tools/train.py", line 143, in <module>
main(args)main(args)
File "/media/HDD/조홍석/YOLOv6/tools/train.py", line 116, in main
File "/media/HDD/조홍석/YOLOv6/tools/train.py", line 116, in main
cfg, device, args = check_and_init(args)cfg, device, args = check_and_init(args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/media/HDD/조홍석/YOLOv6/tools/train.py", line 102, in check_and_init
File "/media/HDD/조홍석/YOLOv6/tools/train.py", line 102, in check_and_init
device = select_device(args.device)
device = select_device(args.device)
^ ^ ^ ^ ^ ^ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
^^ File "/media/HDD/조홍석/YOLOv6/yolov6/utils/envs.py", line 32, in select_device
^^^^
File "/media/HDD/조홍석/YOLOv6/yolov6/utils/envs.py", line 32, in select_device
assert torch.cuda.is_available()
AssertionError
assert torch.cuda.is_available()
AssertionError
[2024-04-15 19:19:48,319] torch.distributed.elastic.multiprocessing.api: [ERROR] failed (exitcode: 1) local_rank: 0 (pid: 29482) of binary: /home/dilab03/anaconda3/bin/python
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/home/dilab03/anaconda3/lib/python3.11/site-packages/torch/distributed/launch.py", line 198, in <module>
main()
File "/home/dilab03/anaconda3/lib/python3.11/site-packages/torch/distributed/launch.py", line 194, in main
launch(args)
File "/home/dilab03/anaconda3/lib/python3.11/site-packages/torch/distributed/launch.py", line 179, in launch
run(args)
File "/home/dilab03/anaconda3/lib/python3.11/site-packages/torch/distributed/run.py", line 803, in run
elastic_launch(
File "/home/dilab03/anaconda3/lib/python3.11/site-packages/torch/distributed/launcher/api.py", line 135, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dilab03/anaconda3/lib/python3.11/site-packages/torch/distributed/launcher/api.py", line 268, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
tools/train.py FAILED
------------------------------------------------------------
Failures:
[1]:
time : 2024-04-15_19:19:48
host : dilab03-Super-Server
rank : 1 (local_rank: 1)
exitcode : 1 (pid: 29483)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2024-04-15_19:19:48
host : dilab03-Super-Server
rank : 0 (local_rank: 0)
exitcode : 1 (pid: 29482)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
```
### Additional
_No response_ | closed | 2024-04-15T15:31:10Z | 2024-05-26T00:23:56Z | https://github.com/ultralytics/yolov5/issues/12925 | [
"question",
"Stale"
] | Cho-Hong-Seok | 2 |
ray-project/ray | data-science | 51,368 | [Core] Problems with uv run and remote cluster | ### What happened + What you expected to happen
I'm trying to use the new uv run feature of Ray 2.43.0, but I'm stumbling into an issue.
When I run it locally, things seems to work, but when I try to launch it on a remote cluster like this:
```
ray.init(
"ray://<internal_url>:10001",
runtime_env={
"py_executable": "uv run",
"working_dir": "/project/training/"
},
)
```
I get the following error:
```
Starting Ray client server failed. See ray_client_server_23070.err for detailed logs.
```
That log file has the following information:
```
Using CPython 3.10.12 interpreter at: /usr/bin/python3
Creating virtual environment at: .venv
Downloading pillow (4.3MiB)
Downloading nvidia-cudnn-cu12 (697.8MiB)
<...cut for brevity...>
Downloaded plotly
Downloaded torch
Installed 147 packages in 2.60s
2025-03-13 11:53:49,117 INFO server.py:898 -- Starting Ray Client server on 0.0.0.0:23070, args Namespace(host='0.0.0.0', port=23070, mode='specific-server', address='10.0.0.200:6379', redis_username=None, redis_password=None, runtime_env_agent_address=None)
2025-03-13 11:53:54,192 INFO server.py:944 -- 25 idle checks before shutdown.
2025-03-13 11:53:59,202 INFO server.py:944 -- 20 idle checks before shutdown.
2025-03-13 11:54:04,217 INFO server.py:944 -- 15 idle checks before shutdown.
2025-03-13 11:54:09,233 INFO server.py:944 -- 10 idle checks before shutdown.
2025-03-13 11:54:14,242 INFO server.py:944 -- 5 idle checks before shutdown.
```
I'm not sure how to debug from here. working_dir contains the pyproject.toml file. Normally it would start a worker node and start launching the job, but something goes wrong.
I am using a custom Docker image for running ray, but not sure if that affects anything. We have been running ray with pip for a long time.
Using {"pip": [...requirements...]} works fine with the same setup, but takes a very long time.
### Versions / Dependencies
ray[serve,train,tune,data]==2.43.0
### Reproduction script
Following script has less dependencies compared to log above, but reproduces the same way:
#### /project/uv-test/main.py:
```python
import cowsay
import ray
@ray.remote
def f():
return cowsay.cow("Hello, world!")
ray.init(
"ray://<internal_url>:10001",
# This doesn't work:
runtime_env={"working_dir": "/project/uv-test/", "py_executable": "uv run"},
# This works:
# runtime_env={
# "working_dir": "/project/uv-test/",
# "pip": ["cowsay", "ray[serve,train,tune,data]==2.43.0"],
# },
)
print(ray.get([f.remote()]))
```
#### /project/uv-test/pyproject.toml
```
[project]
name = "example"
version = "0.1.0"
requires-python = ">=3.10"
dependencies = [
"ray[serve,train,tune,data]==2.43.0",
"cowsay",
]
```
### Issue Severity
Low: It annoys or frustrates me. | open | 2025-03-14T09:26:59Z | 2025-03-21T21:52:42Z | https://github.com/ray-project/ray/issues/51368 | [
"bug",
"P1",
"core",
"core-runtime-env",
"uv"
] | sveint | 4 |
widgetti/solara | jupyter | 221 | What happened to Switch? | I'm new to solara and experimenting, now by taking a streamlit project and changing the front end. Streamlit has these expandable containers. Looking at the online API documentation, Switch looked like an easy way to mimmick this functionality. The following code is based on the Switch example, but when I run it, I get the message that the solara module does not contain Switch:
```
solara.Switch(label="For more information .turn on the switch", value=show_more)
if show_more.value:
solara.Markdown(f"This chart shows ... ")
```
Am I doing something wrong? I'm running version 1.19. So far everything else is working pretty well.
If Switch is no longer available, can you recommend an alternative approach? I tried using Button to set show_more, but while the button seems to work, I haven't figured out the real time framework very well yet.
| closed | 2023-07-28T16:48:43Z | 2023-07-28T19:48:10Z | https://github.com/widgetti/solara/issues/221 | [] | stepsbystep | 3 |
fastapi/sqlmodel | fastapi | 504 | underscore_attrs_are_private cannot use | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
from sqlmodel import SQLModel
class UserBase(SQLModel):
first_name: str
last_name: str
class UserSchema(UserBase):
class Config:
underscore_attrs_are_private = True
if __name__ == "__main__":
print("Hello world!")
```
### Description
File "c:\Users\dongf\Source\Python\test\main.py", line 9, in <module>
class UserSchema(UserBase):
File "C:\Users\dongf\Source\Python\test\venv\Lib\site-packages\sqlmodel\main.py", line 272, in __new__
new_cls = super().__new__(cls, name, bases, dict_used, **config_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "pydantic\main.py", line 283, in pydantic.main.ModelMetaclass.__new__
File "<frozen abc>", line 106, in __new__
TypeError: __weakref__ slot disallowed: either we already got one, or __itemsize__ != 0
### Operating System
Windows
### Operating System Details
_No response_
### SQLModel Version
0.0.8
### Python Version
Python 3.11.0, Python 3.9.0
### Additional Context
_No response_ | open | 2022-11-22T06:44:57Z | 2022-11-22T08:24:56Z | https://github.com/fastapi/sqlmodel/issues/504 | [
"question"
] | dongfengweixiao | 1 |
pytorch/pytorch | numpy | 149,533 | UnsupportedOperatorException: aten._fft_r2c.default | ### 🐛 Describe the bug
We ran into this error when trying to convert the <a href="https://huggingface.co/speechbrain/lang-id-voxlingua107-ecapa">VoxLingua107 ECAPA-TDNN Spoken Language Identification Model</a> to ONNX. Replication and error outputs are below; a more verbose logs file is attached as well.
[onnx_export_logs.md](https://github.com/user-attachments/files/19347887/onnx_export_logs.md)
### Steps to replicate the error (using Linux machine):
We followed the README for Linux to download and build Pytorch on a Conda environment, but checked out the commit at <a href="https://github.com/pytorch/pytorch/commit/f89309fb732f93a21b5a3e49124623949b20c7dc">f89309f</a>. The next steps detail how to replicate the error we encountered when exporting the VoxLingua model.
1. Install speechbrain dependencies:
```
pip install git+https://github.com/speechbrain/speechbrain.git@develop
```
2. Set up VoxLingua project in new Python file:
```
import torch
import torchaudio
from speechbrain.inference.classifiers import EncoderClassifier
import torch.onnx
language_id = EncoderClassifier.from_hparams(source="speechbrain/lang-id-voxlingua107-ecapa", savedir="tmp")
# Create dummy audio signal data
signal = torch.zeros(48000)
prediction = language_id.classify_batch(signal)
print(prediction)
```
3. Add torch.onnx command to end of Python file:
```
torch.onnx.export(language_id, signal, "langid.onnx", export_params=True,
do_constant_folding=True, input_names=['input'], output_names=['output'],
dynamic_axes={'input' : {0 : 'batch_size'}}, dynamo=True, report=True)
```
4. Run in conda environment:
```
python3 <FILENAME>.py
```
### Error message:
```
torch._subclasses.fake_tensor.UnsupportedOperatorException: aten._fft_r2c.default
```
### Stack trace:
```
Traceback (most recent call last):
File "/home/convertonnx.py", line 14, in <module>
torch.onnx.export(language_id, signal, "langid.onnx", export_params=True,
~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
do_constant_folding=True, input_names=['input'], output_names=['output'],
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
dynamic_axes={'input' : {0 : 'batch_size'}}, dynamo=True, report=True) #variable length axes
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/anaconda3/envs/pytorch_conda/lib/python3.13/site-packages/torch/onnx/__init__.py", line 351, in export
return _compat.export_compat(
~~~~~~~~~~~~~~~~~~~~~^
model,
^^^^^^
...<19 lines>...
fallback=fallback,
^^^^^^^^^^^^^^^^^^
)
^
File "/home/anaconda3/envs/pytorch_conda/lib/python3.13/site-packages/torch/onnx/_internal/exporter/_compat.py", line 304, in export_compat
onnx_program = _core.export(
model,
...<11 lines>...
verbose=verbose,
)
File "/home//anaconda3/envs/pytorch_conda/lib/python3.13/site-packages/torch/onnx/_internal/exporter/_core.py", line 1292, in export
raise _errors.TorchExportError(
...<7 lines>...
) from first_error
```
### Versions
```
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.31.6
Libc version: glibc-2.35
Python version: 3.13.2 | packaged by Anaconda, Inc. | (main, Feb 6 2025, 18:56:02) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU E5-2697A v4 @ 2.60GHz
CPU family: 6
Model: 79
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 8
Stepping: 1
BogoMIPS: 5199.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon nopl xtopology tsc_reliable nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault pti ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 invpcid rtm rdseed adx smap xsaveopt arat md_clear flush_l1d arch_capabilities
Hypervisor vendor: VMware
Virtualization type: full
L1d cache: 256 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 2 MiB (8 instances)
L3 cache: 320 MiB (8 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-7
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT Host state unknown
Versions of relevant libraries:
[pip3] numpy==2.2.3
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.17.0
[pip3] onnxscript==0.2.2
[pip3] optree==0.14.1
[pip3] torch==2.6.0
[pip3] torchaudio==2.6.0
[pip3] triton==3.2.0
[conda] mkl-include 2025.0.1 pypi_0 pypi
[conda] mkl-static 2025.0.1 pypi_0 pypi
[conda] numpy 2.2.3 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] optree 0.14.1 pypi_0 pypi
[conda] torch 2.6.0 pypi_0 pypi
[conda] torchaudio 2.6.0 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
```
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | closed | 2025-03-19T17:37:55Z | 2025-03-21T20:30:06Z | https://github.com/pytorch/pytorch/issues/149533 | [
"module: onnx",
"triaged",
"oncall: pt2",
"oncall: export"
] | ivyw-ts | 7 |
CorentinJ/Real-Time-Voice-Cloning | pytorch | 955 | zsh: illegal hardware instruction python demo_toolbox.py | When i'm running ‘python demo_toolbox.py’,The console prompts me‘zsh: illegal hardware instruction python demo_toolbox.py’;
My operating environment is macbook(m1), python 3.9.9,pytorch 1.10.0;
what should i do now; | closed | 2021-12-21T14:50:40Z | 2021-12-28T12:34:20Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/955 | [] | absc | 0 |
encode/uvicorn | asyncio | 1,713 | GitHub TestSuite checks are failing | I think this might be due to #1696, the pip upgrade fails for Windows, and others are cascaded into cancellation.
```
Run scripts/install
scripts/install
shell: C:\Program Files\Git\bin\bash.EXE --noprofile --norc -e -o pipefail {0}
env:
pythonLocation: C:\hostedtoolcache\windows\Python\3.7.9\x64
PKG_CONFIG_PATH: C:\hostedtoolcache\windows\Python\3.7.9\x64/lib/pkgconfig
Python_ROOT_DIR: C:\hostedtoolcache\windows\Python\3.7.9\x64
Python[2](https://github.com/encode/uvicorn/actions/runs/3264291960/jobs/5364701055#step:4:2)_ROOT_DIR: C:\hostedtoolcache\windows\Python\[3](https://github.com/encode/uvicorn/actions/runs/3264291960/jobs/5364701055#step:4:3).7.9\x6[4](https://github.com/encode/uvicorn/actions/runs/3264291960/jobs/5364701055#step:4:4)
Python3_ROOT_DIR: C:\hostedtoolcache\windows\Python\3.7.9\x[6](https://github.com/encode/uvicorn/actions/runs/3264291960/jobs/5364701055#step:4:6)4
+ '[' -z true ']'
+ PIP=pip
+ pip install -U pip
Requirement already satisfied: pip in c:\hostedtoolcache\windows\python\3.[7](https://github.com/encode/uvicorn/actions/runs/3264291960/jobs/5364701055#step:4:7).[9](https://github.com/encode/uvicorn/actions/runs/3264291960/jobs/5364701055#step:4:9)\x64\lib\site-packages (22.2.2)
Collecting pip
Downloading pip-22.3-py3-none-any.whl (2.1 MB)
ERROR: To modify pip, please run the following command:
---------------------------------------- 2.1/2.1 MB 32.4 MB/s eta 0:00:00
c:\hostedtoolcache\windows\python\3.7.9\x64\python.exe -m pip install -U pip
Notice: A new release of pip available: 22.2.2 -> 22.3
Notice: To update, run: python.exe -m pip install --upgrade pip
Error: Process completed with exit code 1.
``` | closed | 2022-10-18T19:13:24Z | 2022-10-19T08:48:52Z | https://github.com/encode/uvicorn/issues/1713 | [] | adnaanbheda | 3 |
JaidedAI/EasyOCR | pytorch | 666 | how can I convert the pretrained .pth models to .bin for libtorch? steve8000818@gmail.com | how can I convert the pretrained .pth models to .bin for libtorch? steve8000818@gmail.com | closed | 2022-02-14T23:32:45Z | 2022-08-25T10:52:30Z | https://github.com/JaidedAI/EasyOCR/issues/666 | [] | nissansz | 0 |
huggingface/transformers | python | 36,495 | `_load_state_dict_into_meta_model` - `'NoneType' object has no attribute 'load_state_dict'` | https://github.com/huggingface/diffusers/actions/runs/13615360562/job/38057746315?pr=10898
```
model = StableDiffusionSafetyChecker(
(vision_model): CLIPVisionModel(
(vision_model): CLIPVisionTransformer(
(emb...=1e-05, elementwise_affine=True)
)
)
(visual_projection): Linear(in_features=32, out_features=64, bias=False)
)
state_dict = {'concept_embeds': tensor([[1., 1., 1., ..., 1., 1., 1.],
[1., 1., 1., ..., 1., 1., 1.],
[1., 1., 1...., 1., 1.,
1., 1., 1., 1., 1., 1., 1., 1., 1., 1.]]), 'special_care_embeds_weights': tensor([1., 1., 1.]), ...}
start_prefix = ''
expected_keys = ['concept_embeds', 'special_care_embeds', 'concept_embeds_weights', 'special_care_embeds_weights', 'vision_model.vision_model.embeddings.class_embedding', 'vision_model.vision_model.embeddings.patch_embedding.weight', ...]
device_map = None, offload_folder = None, offload_index = None
state_dict_folder = None, state_dict_index = None, dtype = torch.float16
hf_quantizer = None, is_safetensors = False, keep_in_fp32_modules = None
unexpected_keys = [], device_mesh = None
shard_file = '/github/home/.cache/huggingface/hub/models--hf-internal-testing--tiny-stable-diffusion-pipe/snapshots/3ee6c9f225f088ad5d35b624b6514b091e6a4849/safety_checker/pytorch_model.bin'
@torch.no_grad()
def _load_state_dict_into_meta_model(
model: torch.nn.Module,
state_dict: Dict[str, torch.Tensor],
start_prefix,
expected_keys,
device_map=None,
offload_folder=None,
offload_index=None,
state_dict_folder=None,
state_dict_index=None,
dtype=None,
hf_quantizer=None,
is_safetensors=False,
keep_in_fp32_modules=None,
unexpected_keys=None, # passing `unexpected` for cleanup from quantization items
device_mesh=None,
shard_file=None,
):
"""
This is somewhat similar to `_load_state_dict_into_model`, but deals with a model that has some or all of its
params on a `meta` device. It replaces the model params with the data from the `state_dict`, while moving the
params back to the normal device, but only for `loaded_state_dict_keys`.
`start_prefix` is used for models which insert their name into model keys, e.g. `bert` in
`bert.pooler.dense.weight`
It also initialize tensor parallelism for each module if needed.
"""
tensor_device = None
if device_map is not None and device_map.get("", None) is not None:
tensor_device = device_map[""].index if isinstance(device_map[""], torch.device) else device_map[""]
if device_map is not None:
device_map_regex = "|".join(sorted(device_map.keys(), reverse=True))
# we need this later to initialize tensor parallelism
if device_mesh is not None:
full_tp_plan = model.config.base_model_tp_plan
for submodule in model.modules():
full_tp_plan.update(getattr(submodule, "_tp_plan", {}))
file_pointer = None
bin_state_dict = None
if shard_file.endswith(".safetensors"):
file_pointer = safe_open(shard_file, framework="pt", device=tensor_device)
else:
bin_state_dict = load_state_dict(shard_file, map_location="cpu")
error_msgs = []
is_quantized = hf_quantizer is not None
is_torch_e4m3fn_available = hasattr(torch, "float8_e4m3fn")
for serialized_param_name, empty_param in state_dict.items():
# serialized_param_name is the raw, serialized name
# fixed_param_name is the model's equivalent
fixed_param_name, _ = model.rename_key(serialized_param_name)
if fixed_param_name not in expected_keys:
continue
# we need to use serialized_param_name as file pointer is untouched
param = (
file_pointer.get_slice(serialized_param_name)
if shard_file.endswith(".safetensors")
else bin_state_dict[serialized_param_name]
)
# We convert floating dtypes to the `dtype` passed except for float8_e4m3fn type. We also want to keep the buffers/params
# in int/uint/bool and not cast them.
param_casting_dtype = None
is_param_float8_e4m3fn = is_torch_e4m3fn_available and empty_param.dtype == torch.float8_e4m3fn
if dtype is not None and empty_param.dtype.is_floating_point and not is_param_float8_e4m3fn:
if (
keep_in_fp32_modules is not None
and keep_in_fp32_modules.search(fixed_param_name)
and dtype == torch.float16
):
param_casting_dtype = torch.float32
else:
param_casting_dtype = dtype
if device_mesh is not None: # In this case, the param is already on the correct device!
module_to_tp, param_type = find_submodule_and_param_name(model, fixed_param_name)
current_module_plan = None
full_tp_plan_ = "|".join(full_tp_plan.keys()).replace("*", "[0-9]+")
if plan := re.search(full_tp_plan_, fixed_param_name):
match = re.sub("[0-9]+", "*", plan[0])
current_module_plan = full_tp_plan[match]
if current_module_plan is not None:
tp_layer = translate_to_torch_parallel_style(current_module_plan)
rank = tensor_device
row, col = empty_param.shape
if "rowwise" == current_module_plan:
param = param[:, rank * (col // device_mesh.size()) : (rank + 1) * (col // device_mesh.size())]
shard = Shard(1)
tp_layer.desired_input_layouts = (Shard(-1),)
elif "colwise" == current_module_plan:
param = param[rank * (row // device_mesh.size()) : (rank + 1) * (row // device_mesh.size()), :]
shard = Shard(0)
else:
param = param[rank * (row // device_mesh.size()) : (rank + 1) * (row // device_mesh.size()), :]
shard = Shard(0)
if param_casting_dtype is not None and param_casting_dtype != empty_param.dtype:
param = param.to(param_casting_dtype)
local_parameter = DTensor.from_local(
param,
device_mesh=device_mesh,
placements=[shard] * device_mesh.ndim,
)
if isinstance(module_to_tp.weight, nn.Parameter):
local_parameter = torch.nn.Parameter(local_parameter)
module_to_tp.weight = local_parameter
input_fn = partial(tp_layer._prepare_input_fn, tp_layer.input_layouts, tp_layer.desired_input_layouts)
output_fn = partial(tp_layer._prepare_output_fn, tp_layer.output_layouts, tp_layer.use_local_output)
distribute_module(module_to_tp, device_mesh, None, input_fn, output_fn)
else:
module_to_tp.load_state_dict({param_type: param[:]}, strict=False, assign=True)
else:
if device_map is None:
param_device = "cpu"
else:
module_layer = re.search(device_map_regex, fixed_param_name)
if not module_layer:
raise ValueError(f"{fixed_param_name} doesn't have any device set.")
else:
param_device = device_map[module_layer.group()]
if param_device == "disk":
if not is_safetensors:
offload_index = offload_weight(param[:], fixed_param_name, offload_folder, offload_index)
elif param_device == "cpu" and state_dict_index is not None:
state_dict_index = offload_weight(param[:], fixed_param_name, state_dict_folder, state_dict_index)
elif (
not is_quantized
or (not hf_quantizer.requires_parameters_quantization)
or (
not hf_quantizer.check_quantized_param(
model,
param,
fixed_param_name,
state_dict,
param_device=param_device,
device_map=device_map,
)
)
):
if is_fsdp_enabled():
param_device = "cpu" if is_local_dist_rank_0() else "meta"
module, param_type = find_submodule_and_param_name(model, fixed_param_name)
if param_casting_dtype is not None and param_casting_dtype != empty_param.dtype:
param = param[:].to(param_casting_dtype)
> module.load_state_dict(
{param_type: param[:].to(param_device)},
strict=False,
assign=True,
)
E AttributeError: 'NoneType' object has no attribute 'load_state_dict'
``` | closed | 2025-03-02T12:34:36Z | 2025-03-03T17:53:31Z | https://github.com/huggingface/transformers/issues/36495 | [] | hlky | 5 |
coqui-ai/TTS | pytorch | 2,494 | Reporting a vulnerability | Hello!
I hope you are doing well!
We are a security research team. Our tool automatically detected a vulnerability in this repository. We want to disclose it responsibly. GitHub has a feature called **Private vulnerability reporting**, which enables security research to privately disclose a vulnerability. Unfortunately, it is not enabled for this repository.
Can you enable it, so that we can report it?
Thanks in advance!
PS: you can read about how to enable private vulnerability reporting here: https://docs.github.com/en/code-security/security-advisories/repository-security-advisories/configuring-private-vulnerability-reporting-for-a-repository | closed | 2023-04-10T11:04:25Z | 2023-05-12T13:53:40Z | https://github.com/coqui-ai/TTS/issues/2494 | [
"wontfix"
] | igibek | 4 |
WeblateOrg/weblate | django | 13,908 | Lower automatic suggestions threshold in case of no results | ### Describe the problem
In automatic suggestions, the threshold to show them (75 as written [here](https://github.com/WeblateOrg/weblate/discussions/10933#discussioncomment-8347434)) is sometimes too high and useful suggestions are omitted. As a workaround, user can enter the string into the "Translation memory" field and see the search results where the threshold is lower (10).
### Describe the solution you would like
For me, it would make sense to lower the threshold automatically until a first result is found.
As an alternative, a more convenient and helpful UI could be introduced - e.g. a button which would perform the search without the need to copypaste the string manually (or a button which would copy the string to the "Translation memory" field).
### Describe alternatives you have considered
_No response_
### Screenshots
An example from The Document Foundation instance of Weblate:
There are strings:
* [original](https://translations.documentfoundation.org/translate/libo_ui-25-2/filtermessages/cs/?checksum=75bbf32d6ede6a97): "This setting enables you to export the document as a .pdf file containing two file formats: PDF and ODF."
* and [newer](https://translations.documentfoundation.org/translate/libo_ui-master/filtermessages/cs/?checksum=86a111d3787c64bc) "This setting enables you to export the document as a .pdf file containing two file formats: PDF and ODF as an attachment."
differing only in the last "as an attachment" text.
However, the original string is not shown in the suggestions for the newer one (but translators would expect it):

If using the search, the string is there:

(Which corresponds with the thresholds: for these strings, thresholds 75 and 10 mean similarities 97 % and 62 %; the suggestion has 85 %).
### Additional context
_No response_ | closed | 2025-02-17T20:31:57Z | 2025-02-24T18:56:41Z | https://github.com/WeblateOrg/weblate/issues/13908 | [
"bug"
] | strepon | 8 |
fastapi/sqlmodel | fastapi | 61 | Can SQLModel be more generic in its implementations? Swappable backends? | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Description
SQLModel has emerged as the long-awaited missing link that uses type annotations to build generic data models to bridge the gap between data validators (like Pydantic) and ORM layer (like SQLAlchemy).
However at the moment SQLModel is tied directly to Pydantic and SQLAlchemy, and works only with Pydantic and SQLAlchemy.
I wonder if SQLModel can be a more generic bridge, to be used between _any_ modern data validator, and ORM.
### Wanted Solution
I'd like a stand-alone version of SQLModel that can be installed without requiring Pydantic and SQLModel to be installed along with it.
### Operating System
Linux
### Operating System Details
N/A
### SQLModel Version
0.0.4
### Python Version
3.8
### Additional Context
_No response_ | open | 2021-08-31T05:18:42Z | 2023-04-24T06:50:05Z | https://github.com/fastapi/sqlmodel/issues/61 | [
"feature"
] | ashleysommer | 2 |
miguelgrinberg/flasky | flask | 270 | Account confirming problem | Hi Mr.Grinberg & everyone:
I've got an issue about account comfirming:
When I click the link from the email sent by my webapp, but failed comfirming:
The webpage flashed 'You have confirmed your account. Thanks! ',BUT the url was still '/auth/unconfirmed', and I check the data through sqlite3, the value of myaccount.confirmed = 0.
I tried several times sending emails and click the link to confirm , but never succeeded.
You can check my repositories named 'FLAPP' for further information, it's generally same as tag-8e.
Hope u can answer this. | closed | 2017-05-28T03:42:12Z | 2017-05-28T08:29:12Z | https://github.com/miguelgrinberg/flasky/issues/270 | [
"question"
] | butflame | 3 |
WZMIAOMIAO/deep-learning-for-image-processing | deep-learning | 603 | 关于Vit的pos embedding | 您好!有两个问题想询问一下
1.讲解的时候说明,用1D位置编码 ,9个patch已经按照1-9 排好了;但在代码中为什么没有用到1-9这个位置顺序,需要学习才能得到?
<img width="785" alt="image" src="https://user-images.githubusercontent.com/54054113/180702963-d63a9c27-6237-4f92-b2dc-52c2a7f90ad0.png">
2.为什么class_token也需要pos_position,他不是固定好放在第0个位置吗?
谢谢 | closed | 2022-07-25T05:12:14Z | 2022-08-06T09:00:51Z | https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/issues/603 | [] | Liqq1 | 6 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.