repo_name stringlengths 9 75 | topic stringclasses 30 values | issue_number int64 1 203k | title stringlengths 1 976 | body stringlengths 0 254k | state stringclasses 2 values | created_at stringlengths 20 20 | updated_at stringlengths 20 20 | url stringlengths 38 105 | labels listlengths 0 9 | user_login stringlengths 1 39 | comments_count int64 0 452 |
|---|---|---|---|---|---|---|---|---|---|---|---|
robinhood/faust | asyncio | 270 | Table Length Method Issue | When iterating through the table, we only display the active keys. But when we use `len(table)`, it actually counts both active and inactive keys: https://github.com/robinhood/faust/blob/master/faust/stores/rocksdb.py#L306. We should only count the number of active keys to make it consistent with `iterkeys` and `iteritems`. @ask let me know if this is a good change or if there are any potential side effects if we change this. | closed | 2019-01-09T23:43:38Z | 2019-01-15T00:29:44Z | https://github.com/robinhood/faust/issues/270 | [] | allisonwang | 1 |
waditu/tushare | pandas | 1,028 | Free-float share count in the daily indicators is negative |
```json
{
    "api_name": "daily_basic",
    "token": "XXXXXXXXXXXX",
    "params": {
        "trade_date": "20190424"
    },
    "fields": ["trade_date", "ts_code", "pe", "pe_ttm", "pb", "ps", "ps_ttm", "total_share", "float_share", "free_share"]
}
```
When calling the API, the free-float share count (`free_share`) returned for stock "600657" is negative: "-5719.7841"
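For anyone reproducing this, a hedged sketch that lists every row with a negative `free_share` (the helper is a hypothetical name; the real `pro.daily_basic` call needs a token and network access, so it is shown only as a comment):

```python
# Hypothetical sketch: filter out rows whose free-float share count is negative.
def negative_free_share(rows):
    """Return the rows whose 'free_share' field is negative."""
    return [r for r in rows
            if r.get("free_share") is not None and r["free_share"] < 0]

# With the real client (requires a token and network access):
#   import tushare as ts
#   pro = ts.pro_api("XXXXXXXXXXXX")
#   df = pro.daily_basic(trade_date="20190424", fields="ts_code,free_share")
#   print(df[df.free_share < 0])

rows = [
    {"ts_code": "600657.SH", "free_share": -5719.7841},  # value from this report
    {"ts_code": "600000.SH", "free_share": 2810.0},
]
print(negative_free_share(rows))  # only 600657.SH should show up
```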
Tushare ID: 261375 | closed | 2019-05-02T15:59:30Z | 2019-07-02T14:07:20Z | https://github.com/waditu/tushare/issues/1028 | [] | musicaudience | 3 |
ipython/ipython | data-science | 13,926 | capture_output does not respect trailing semicolon | By convention, IPython suppresses the return value of the last expression if it ends with a trailing semicolon. `%%capture` and `utils.capture.capture_output` don't respect this convention:
```python
In [1]: %%capture out
...: x = 1
...: 1;
In [2]: out.outputs[0]
Out[2]: 1
```
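For context, the suppression decision lives in IPython's `DisplayHook.quiet()`, which tokenizes the cell source and checks whether the last meaningful token is a semicolon; the capture path apparently records the output without honoring that check. A rough, simplified stand-in for the check (hypothetical helper, not IPython's actual code) looks like:

```python
import io
import tokenize

def ends_with_semicolon(cell: str) -> bool:
    """Simplified stand-in for IPython's trailing-semicolon check.

    Returns True when the last meaningful token of the cell is ';',
    i.e. when the result of the final expression should be suppressed.
    """
    try:
        tokens = [
            t for t in tokenize.generate_tokens(io.StringIO(cell).readline)
            if t.type not in (tokenize.NL, tokenize.NEWLINE,
                              tokenize.COMMENT, tokenize.ENDMARKER)
        ]
    except (tokenize.TokenError, IndentationError):
        return False
    return bool(tokens) and tokens[-1].string == ";"

print(ends_with_semicolon("x = 1\n1;\n"))  # True: should be suppressed
print(ends_with_semicolon("x = 1\n1\n"))   # False: should be displayed
```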
Related: #10227 #13841 (but it's about a different problem) | open | 2023-02-04T15:34:49Z | 2023-02-12T01:37:28Z | https://github.com/ipython/ipython/issues/13926 | [
"magics"
] | akhmerov | 1 |
MaartenGr/BERTopic | nlp | 1,626 | Documents and Topics are different lengths and cannot merge the topics | Hi there Maarten!
Thank you for all of the support you are offering the community!
I am running into the problem that my docs and topics have different lengths after running the model. I am not able to run the model again, but because of this issue I cannot plot the topics over time or merge them.
Would it be possible to suggest how to proceed from here?
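If refitting is off the table, one hedged way to proceed (a sketch only; `topics_` and `transform()` exist on fitted BERTopic models, but verify against your installed version) is to re-derive the per-document assignments instead of reusing a stale list, and to fail early when the lengths still disagree:

```python
# Hedged recovery sketch; `topic_model` is assumed to be an already-fitted
# BERTopic instance (its calls are shown as comments only).
def check_alignment(docs, topics):
    """Fail early with a clear message instead of deep inside plotting code."""
    if len(docs) != len(topics):
        raise ValueError(
            f"{len(docs)} documents vs {len(topics)} topic assignments; "
            "re-derive the assignments before calling topics_over_time/merge"
        )

# Rather than reusing a stale list returned by fit_transform, re-read the
# stored per-document assignments, or recompute them for these exact docs:
#   topics = topic_model.topics_
#   topics, _ = topic_model.transform(docs)

check_alignment(["doc a", "doc b"], [0, 1])  # passes silently when aligned
```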
<img width="1218" alt="Screenshot 2023-11-10 at 14 35 04" src="https://github.com/MaartenGr/BERTopic/assets/28362483/6bcc9c34-4cb7-4e68-98fd-67281de2b126">
Thank you in advance!
| open | 2023-11-10T13:35:48Z | 2023-11-27T16:38:15Z | https://github.com/MaartenGr/BERTopic/issues/1626 | [] | daianacric95 | 8 |
nicodv/kmodes | scikit-learn | 104 | Kprototypes unexpected keyword argument 'n_jobs' | I notice that when trying to run:
`km = KPrototypes(n_clusters=20, init='Huang', n_init=1, n_jobs=-1, verbose=1)`
that I get an error that says:
`__init__() got an unexpected keyword argument 'n_jobs'`
I noticed that this should be fixed in the next release - when will that be? | closed | 2019-02-18T19:05:26Z | 2019-02-18T19:21:02Z | https://github.com/nicodv/kmodes/issues/104 | [] | klepikhina | 0 |
gradio-app/gradio | data-visualization | 9,969 | Add a setting that allows users to customize the tab placement (`Left`, `Center`, or `Right`). | - [x] I have searched to see if a similar issue already exists.
I think many would agree that it would be convenient to place some tabs in different locations. For example, the "Settings" tab could be located on the right, and the "INFO" tab could be on the left. The main tabs could be placed in the center or in another convenient location for users.
It would also be nice to have the ability to arrange tabs in a column instead of in a single line.
These changes could significantly improve some interfaces and make them more user-friendly.
| open | 2024-11-16T09:11:40Z | 2025-01-11T12:41:36Z | https://github.com/gradio-app/gradio/issues/9969 | [
"enhancement"
] | Bebra777228 | 1 |
Gerapy/Gerapy | django | 115 | Domain registration | closed | 2019-10-06T15:28:51Z | 2019-10-06T15:32:31Z | https://github.com/Gerapy/Gerapy/issues/115 | [] | Germey | 0 |
horovod/horovod | deep-learning | 3,589 | Can not run `RayExecutor` | Code:
```python
import ray
from horovod.ray import RayExecutor
import horovod.torch as hvd

# Start the Ray cluster or attach to an existing Ray cluster
ray.init()
num_workers = 1

# Start num_workers actors on the cluster
settings = RayExecutor.create_settings(timeout_s=30)
executor = RayExecutor(
    settings, num_workers=num_workers, cpus_per_worker=1, use_gpu=True)

# This will launch `num_workers` actors on the Ray Cluster.
executor.start()

# Using the stateless `run` method, a function can take in any args or kwargs
def simple_fn():
    hvd.init()
    print("hvd rank", hvd.rank())
    return hvd.rank()

# Execute the function on all workers at once
result = executor.run(simple_fn)
print(result)
executor.shutdown()
```
Result:
```python
2022-06-28 22:05:33,656 INFO services.py:1477 -- View the Ray dashboard at http://127.0.0.1:8266
(BaseHorovodWorker pid=14896) *** SIGSEGV received at time=1656453935 on cpu 0 ***
(BaseHorovodWorker pid=14896) PC: @ 0x7f80eff99fcc (unknown) horovod::common::(anonymous namespace)::BackgroundThreadLoop()
(BaseHorovodWorker pid=14896) @ 0x7f833f117980 4560 (unknown)
(BaseHorovodWorker pid=14896) @ 0x7f833cbffaa3 24 execute_native_thread_routine
(BaseHorovodWorker pid=14896) @ ... and at least 4 more frames
(BaseHorovodWorker pid=14896) [2022-06-28 22:05:35,709 E 14896 14934] logging.cc:325: *** SIGSEGV received at time=1656453935 on cpu 0 ***
(BaseHorovodWorker pid=14896) [2022-06-28 22:05:35,709 E 14896 14934] logging.cc:325: PC: @ 0x7f80eff99fcc (unknown) horovod::common::(anonymous namespace)::BackgroundThreadLoop()
(BaseHorovodWorker pid=14896) [2022-06-28 22:05:35,710 E 14896 14934] logging.cc:325: @ 0x7f833f117980 4560 (unknown)
(BaseHorovodWorker pid=14896) [2022-06-28 22:05:35,710 E 14896 14934] logging.cc:325: @ 0x7f833cbffaa3 24 execute_native_thread_routine
(BaseHorovodWorker pid=14896) [2022-06-28 22:05:35,710 E 14896 14934] logging.cc:325: @ ... and at least 4 more frames
(BaseHorovodWorker pid=14896) Fatal Python error: Segmentation fault
(BaseHorovodWorker pid=14896)
2022-06-28 22:05:35,839 WARNING worker.py:1728 -- A worker died or was killed while executing a task by an unexpected system error. To troubleshoot the problem, check the logs for the dead worker. RayTask ID: ffffffffffffffff192c4de9ae2ac15aafc2ab1801000000 Worker ID: 64a4d8791c417ee4cc7d4ea3473dcb4fa6303705700c01ddc8bfd3fa Node ID: 78571af15b4fb6c918265637b8618542b59fd3c7f493a81a0b6b7569 Worker IP address: 10.0.2.180 Worker port: 33549 Worker PID: 14896 Worker exit type: SYSTEM_ERROR Worker exit detail: Worker unexpectedly exits with a connection error code 2. End of file. There are some potential root causes. (1) The process is killed by SIGKILL by OOM killer due to high memory usage. (2) ray stop --force is called. (3) The worker is crashed unexpectedly due to SIGSEGV or other unexpected errors.
Traceback (most recent call last):
File "test_0628.py", line 25, in <module>
result = executor.run(simple_fn)
File "/home/ubuntu/anaconda3/envs/hd/lib/python3.8/site-packages/horovod/ray/runner.py", line 376, in run
return self._maybe_call_ray(self.adapter.run, **kwargs_)
File "/home/ubuntu/anaconda3/envs/hd/lib/python3.8/site-packages/horovod/ray/runner.py", line 421, in _maybe_call_ray
return driver_func(**kwargs)
File "/home/ubuntu/anaconda3/envs/hd/lib/python3.8/site-packages/horovod/ray/runner.py", line 599, in run
return ray.get(self._run_remote(fn=f))
File "/home/ubuntu/anaconda3/envs/hd/lib/python3.8/site-packages/ray/_private/client_mode_hook.py", line 105, in wrapper
return func(*args, **kwargs)
File "/home/ubuntu/anaconda3/envs/hd/lib/python3.8/site-packages/ray/_private/worker.py", line 2176, in get
raise value
ray.exceptions.RayActorError: The actor died unexpectedly before finishing this task.
class_name: BaseHorovodWorker
actor_id: 192c4de9ae2ac15aafc2ab1801000000
pid: 14896
namespace: acb48abc-7432-420d-923f-2d19a510800c
ip: 10.0.2.180
The actor is dead because its worker process has died. Worker exit type: SYSTEM_ERROR Worker exit detail: Worker unexpectedly exits with a connection error code 2. End of file. There are some potential root causes. (1) The process is killed by SIGKILL by OOM killer due to high memory usage. (2) ray stop --force is called. (3) The worker is crashed unexpectedly due to SIGSEGV or other unexpected errors.
None
Exception ignored in: <function ActorHandle.__del__ at 0x7f93cce2ef70>
Traceback (most recent call last):
File "/home/ubuntu/anaconda3/envs/hd/lib/python3.8/site-packages/ray/actor.py", line 1029, in __del__
AttributeError: 'NoneType' object has no attribute 'worker'
``` | closed | 2022-06-28T22:41:00Z | 2025-03-20T19:51:17Z | https://github.com/horovod/horovod/issues/3589 | [
"bug"
] | JiahaoYao | 1 |
SciTools/cartopy | matplotlib | 1,803 | Lat and lon labels in LambertConformal at edges not shown | ### Description
I want to plot data in the LambertConformal projection, 'cut out' the figure into the shape of the projection, and add axis labels. However, the labels that belong in the corners (minimum and maximum latitude and longitude) are not shown. See image below: I would like to also have the labels for 40N, 80N, 80W and 20E. These are included in the array that I give as input to the `xlocator` and `ylocator` of the `gridliner` object. How can I ensure that these labels in the corners are also shown?

#### Code to reproduce
```python
import numpy as np
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
import matplotlib.path as mpath
import matplotlib.ticker as mticker
bounds_lon = [-80,20]
bounds_lat = [40,80]
projection = ccrs.LambertConformal(central_longitude=np.mean(bounds_lon),central_latitude=np.mean(bounds_lat))
fig, ax = plt.subplots(1,1,figsize=(24,8),subplot_kw={'projection': projection})
ax.coastlines()
# Create nicely shaped boundaries (based on https://stackoverflow.com/questions/65687535/how-do-i-limit-the-longitude-extent-in-cartopys-lambertconformal-and-keep-the-c)
rect = mpath.Path([[bounds_lon[0], bounds_lat[0]],
[bounds_lon[1], bounds_lat[0]],
[bounds_lon[1], bounds_lat[1]],
[bounds_lon[0], bounds_lat[1]],
[bounds_lon[0], bounds_lat[0]],
]).interpolated(20)
proj_to_data = ccrs.PlateCarree()._as_mpl_transform(ax) - ax.transData
rect_in_target = proj_to_data.transform_path(rect)
ax.set_boundary(rect_in_target)
ax.set_extent([bounds_lon[0], bounds_lon[1], bounds_lat[0] - 15, bounds_lat[1]])
# Set axes labels
gl = ax.gridlines(crs=ccrs.PlateCarree(), x_inline=False, y_inline=False,
linewidth=2, color='gray', alpha=0.5, linestyle='--')
gl.xlocator = mticker.FixedLocator(np.arange(bounds_lon[0],bounds_lon[1]+0.1,10))
gl.ylocator = mticker.FixedLocator(np.arange(bounds_lat[0],bounds_lat[1]+0.1,10))
gl.left_labels = True
gl.bottom_labels = True
plt.show()
```
| closed | 2021-06-10T12:45:11Z | 2021-06-10T14:45:20Z | https://github.com/SciTools/cartopy/issues/1803 | [] | MiriamSterl | 1 |
plotly/dash-core-components | dash | 147 | Markdown (and probably SyntaxHighlighter) default `children` value needs update | With dcc 0.15.4 and:
> The children property of dash_core_components.Markdown and dash_core_components.SyntaxHighlighter now accepts an array of strings (previously it had to be a string). Now, if an array is provided, it is collapsed into a string with line breaks (see #134).
now creating a `dcc.Markdown()` component without any `children` value (an empty component to be filled later by a callback) crashes in the React constructor:
```javascript
// must be a string or an array of strings
if (typeof props.children !== 'string') {
    props.children = props.children.join('\n');
}
```
because the default `children` value is not a string but `null`, which crashes the `.join()` call (at least that is what the browser console tells me): `TypeError: Cannot read property 'join' of null`.
I think that the default value for `dcc.Markdown.children` should be `""` or an empty list.
Or, if providing a `children` value is required, then Python should catch components without it and raise an exception; as it is, it was really hard to find the error.
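Until the default is fixed, a hedged Python-side guard (hypothetical helper, mirroring the `join('\n')` collapse done on the React side) can keep callbacks safe:

```python
def normalize_children(children):
    """Normalize a Markdown/SyntaxHighlighter children value.

    Mirrors the React-side collapse of string arrays into one string,
    but also tolerates the None default that currently crashes the
    constructor.
    """
    if children is None:
        return ""
    if isinstance(children, (list, tuple)):
        return "\n".join(children)
    return children

# dcc.Markdown(children=normalize_children(None))  # hypothetical usage
print(normalize_children(["line 1", "line 2"]))
```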
| closed | 2018-01-18T16:22:00Z | 2018-01-18T22:47:02Z | https://github.com/plotly/dash-core-components/issues/147 | [
"dash-type-bug"
] | radekwlsk | 2 |
Anjok07/ultimatevocalremovergui | pytorch | 683 | M1 Mac Heating Up while processing | Hi - I have an M1 Mac running Ventura 13.4 and I am facing a similar issue. The processing time is much longer than it is on a Windows PC (5-10 mins for an average 10MB song) and the Mac heats up significantly while UVR is processing.
| open | 2023-07-21T08:16:05Z | 2023-08-23T20:57:00Z | https://github.com/Anjok07/ultimatevocalremovergui/issues/683 | [] | rajeevbat | 8 |
microsoft/nni | machine-learning | 5,501 | Error occurs when pip install nni[SMAC] | **Describe the issue**:
When I use `pip install nni[SMAC]`, an error occurs. The error is as follows. I don't know how to solve it. Thanks for your answer!
```
Collecting ConfigSpaceNNI>=0.4.7.3
Downloading http://mirrors.aliyun.com/pypi/packages/35/c7/e3b8b1d662498a92fa2913d9c7c2134b4831820c8a13de962b987c0acb18/ConfigSpaceNNI-0.4.7.3.tar.gz (108 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 108.5/108.5 kB 2.6 MB/s eta 0:00:00
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [40 lines of output]
/home/lvqinyi/miniconda3/envs/sunze/lib/python3.9/site-packages/setuptools/_distutils/extension.py:134: UserWarning: Unknown Extension options: 'compiler_directives'
warnings.warn(msg)
/home/lvqinyi/miniconda3/envs/sunze/lib/python3.9/site-packages/setuptools/dist.py:770: UserWarning: Usage of dash-separated 'description-file' will not be supported in future versions. Please use the underscore name 'description_file' instead
warnings.warn(
/home/lvqinyi/miniconda3/envs/sunze/lib/python3.9/site-packages/setuptools/installer.py:27: SetuptoolsDeprecationWarning: setuptools.installer is deprecated. Requirements should be satisfied by a PEP 517 installer.
warnings.warn(
WARNING: The repository located at mirrors.aliyun.com is not a trusted or secure host and is being ignored. If this repository is available via HTTPS we recommend you use HTTPS instead, otherwise you may silence this warning and allow it anyway with '--trusted-host mirrors.aliyun.com'.
ERROR: Could not find a version that satisfies the requirement Cython (from versions: none)
ERROR: No matching distribution found for Cython
Traceback (most recent call last):
File "/home/lvqinyi/miniconda3/envs/sunze/lib/python3.9/site-packages/setuptools/installer.py", line 82, in fetch_build_egg
subprocess.check_call(cmd)
File "/home/lvqinyi/miniconda3/envs/sunze/lib/python3.9/subprocess.py", line 373, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['/home/lvqinyi/miniconda3/envs/sunze/bin/python', '-m', 'pip', '--disable-pip-version-check', 'wheel', '--no-deps', '-w', '/tmp/tmp8spnbdat', '--quiet', 'Cython']' returned non-zero exit status 1.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/tmp/pip-install-tzzzixvz/configspacenni_126c6bee502a4fe7b51e4ef98928bc8c/setup.py", line 56, in <module>
setup(
File "/home/lvqinyi/miniconda3/envs/sunze/lib/python3.9/site-packages/setuptools/__init__.py", line 86, in setup
_install_setup_requires(attrs)
File "/home/lvqinyi/miniconda3/envs/sunze/lib/python3.9/site-packages/setuptools/__init__.py", line 80, in _install_setup_requires
dist.fetch_build_eggs(dist.setup_requires)
File "/home/lvqinyi/miniconda3/envs/sunze/lib/python3.9/site-packages/setuptools/dist.py", line 874, in fetch_build_eggs
resolved_dists = pkg_resources.working_set.resolve(
File "/home/lvqinyi/miniconda3/envs/sunze/lib/python3.9/site-packages/pkg_resources/__init__.py", line 789, in resolve
dist = best[req.key] = env.best_match(
File "/home/lvqinyi/miniconda3/envs/sunze/lib/python3.9/site-packages/pkg_resources/__init__.py", line 1075, in best_match
return self.obtain(req, installer)
File "/home/lvqinyi/miniconda3/envs/sunze/lib/python3.9/site-packages/pkg_resources/__init__.py", line 1087, in obtain
return installer(requirement)
File "/home/lvqinyi/miniconda3/envs/sunze/lib/python3.9/site-packages/setuptools/dist.py", line 944, in fetch_build_egg
return fetch_build_egg(self, req)
File "/home/lvqinyi/miniconda3/envs/sunze/lib/python3.9/site-packages/setuptools/installer.py", line 84, in fetch_build_egg
raise DistutilsError(str(e)) from e
distutils.errors.DistutilsError: Command '['/home/lvqinyi/miniconda3/envs/sunze/bin/python', '-m', 'pip', '--disable-pip-version-check', 'wheel', '--no-deps', '-w', '/tmp/tmp8spnbdat', '--quiet', 'Cython']' returned non-zero exit status 1.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
```
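For what it's worth, the log points at a likely root cause: the isolated build step for `ConfigSpaceNNI` tries to fetch `Cython` from the Aliyun mirror over plain HTTP, pip rejects the mirror ("not a trusted or secure host ... ignored"), and the build then fails with "No matching distribution found for Cython". A hedged workaround, assuming the mirror is set in pip's config file, is to switch it to HTTPS or explicitly trust the host:

```ini
# pip configuration: ~/.config/pip/pip.conf on Linux, %APPDATA%\pip\pip.ini on Windows
[global]
# use the HTTPS endpoint of the mirror instead of http://
index-url = https://mirrors.aliyun.com/pypi/simple/
# or, if HTTPS is not an option, explicitly whitelist the host:
# trusted-host = mirrors.aliyun.com
```

Pre-installing the build dependency first (`pip install Cython`, then `pip install nni[SMAC]`) may also help, since a `setup_requires` dependency is satisfied from the current environment when it is already installed.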
**Environment**:
- NNI version: 2.10
- Training service (local|remote|pai|aml|etc): local
- Client OS: Ubuntu 20.04
- Python version: 3.9.13
- PyTorch version: 1.12.0
- Is conda/virtualenv/venv used?: conda used
- Is running in Docker?: No
| closed | 2023-04-03T14:36:06Z | 2023-04-04T07:14:56Z | https://github.com/microsoft/nni/issues/5501 | [] | sunze992 | 0 |
tox-dev/tox | automation | 3,468 | Allow disabling plugins via CLI | ## Issue
```
# inside tox own codebase:
tox -e 3.13 -- -k test_provision_requires_ok
```
Note that the version of Python is irrelevant, as the failure reproduces with all supported versions.
I also tried to modify the failing test to compensate for the missing pip, but it still failed with another error, now complaining about missing `hatchling`... and adding hatchling did not address this one. So we might have more than one bug here.
```
proj = tox_project({"tox.ini": "[tox]\nrequires=demo-pkg-inline\n[testenv]\npackage=skip\ndeps=pip"})
```
## Output of running tox
<details open>
<summary>Output of <code>tox -rvv</code></summary>
```console
❯ tox -e 3.10 -- -k test_provision_requires_ok
3.10: venv> /Users/ssbarnea/.config/mise/installs/python/3.13.0/bin/uv venv -p 3.10 --allow-existing /Users/ssbarnea/code/os/tox/.tox/3.10
3.10: install_dependency-groups> /Users/ssbarnea/.config/mise/installs/python/3.13.0/bin/uv pip install 'build[virtualenv]>=1.2.2.post1' 'covdefaults>=2.3' 'detect-test-pollution>=1.2' 'devpi-process>=1.0.2' 'diff-cover>=9.2' 'distlib>=0.3.9' 'flaky>=3.8.1' 'hatch-vcs>=0.4' 'hatchling>=1.26.3' 'psutil>=6.1' 'pytest-cov>=5' 'pytest-mock>=3.14' 'pytest-xdist>=3.6.1' 'pytest>=8.3.3' 're-assert>=1.1' 'setuptools>=75.1; python_version <= "3.8"' 'setuptools>=75.6; python_version > "3.8"' 'time-machine>=2.15; implementation_name != "pypy"' 'wheel>=0.45'
.pkg: _optional_hooks> python /Users/ssbarnea/.config/mise/installs/python/3.13.0/lib/python3.13/site-packages/pyproject_api/_backend.py True hatchling.build
.pkg: get_requires_for_build_wheel> python /Users/ssbarnea/.config/mise/installs/python/3.13.0/lib/python3.13/site-packages/pyproject_api/_backend.py True hatchling.build
.pkg: get_requires_for_build_editable> python /Users/ssbarnea/.config/mise/installs/python/3.13.0/lib/python3.13/site-packages/pyproject_api/_backend.py True hatchling.build
.pkg: build_wheel> python /Users/ssbarnea/.config/mise/installs/python/3.13.0/lib/python3.13/site-packages/pyproject_api/_backend.py True hatchling.build
3.10: install_package_deps> /Users/ssbarnea/.config/mise/installs/python/3.13.0/bin/uv pip install 'cachetools>=5.5' 'chardet>=5.2' 'colorama>=0.4.6' 'filelock>=3.16.1' 'packaging>=24.2' 'platformdirs>=4.3.6' 'pluggy>=1.5' 'pyproject-api>=1.8' 'tomli>=2.1; python_version < "3.11"' 'typing-extensions>=4.12.2; python_version < "3.11"' 'virtualenv>=20.27.1'
3.10: install_package> /Users/ssbarnea/.config/mise/installs/python/3.13.0/bin/uv pip install --reinstall --no-deps tox@/Users/ssbarnea/code/os/tox/.tox/.tmp/package/78/tox-4.23.3.dev17+g9152d396.d20241210-py3-none-any.whl
3.10: commands[0]> pytest -k test_provision_requires_ok
================================================================= test session starts =================================================================
platform darwin -- Python 3.10.16, pytest-8.3.4, pluggy-1.5.0
cachedir: .tox/3.10/.pytest_cache
rootdir: /Users/ssbarnea/code/os/tox
configfile: pyproject.toml
testpaths: tests
plugins: cov-6.0.0, flaky-3.8.1, time-machine-2.16.0, devpi-server-6.14.0, anyio-4.7.0, mock-3.14.0, xdist-3.6.1
collected 1824 items / 1823 deselected / 1 selected
tests/test_provision.py E [100%]
======================================================================= ERRORS ========================================================================
____________________________________________________ ERROR at setup of test_provision_requires_ok _____________________________________________________
tox_wheel = PosixPath('/private/var/folders/32/1xrphgzd4xv777syxjtkpdw80000gn/T/pytest-of-ssbarnea/pytest-105/dist0/tox-4.23.4-py3-none-any.whl')
tmp_path_factory = TempPathFactory(_given_basetemp=None, _trace=<pluggy._tracing.TagTracerSub object at 0x101e226b0>, _basetemp=PosixPath...rs/32/1xrphgzd4xv777syxjtkpdw80000gn/T/pytest-of-ssbarnea/pytest-105'), _retention_count=3, _retention_policy='failed')
@pytest.fixture(scope="session")
def tox_wheels(tox_wheel: Path, tmp_path_factory: TempPathFactory) -> list[Path]:
with elapsed("acquire dependencies for current tox"): # takes around 1.5s if already cached
result: list[Path] = [tox_wheel]
info = tmp_path_factory.mktemp("info")
with ZipFile(str(tox_wheel), "r") as zip_file:
zip_file.extractall(path=info)
dist_info = next((i for i in info.iterdir() if i.suffix == ".dist-info"), None)
if dist_info is None: # pragma: no cover
msg = f"no tox.dist-info inside {tox_wheel}"
raise RuntimeError(msg)
distribution = Distribution.at(dist_info)
wheel_cache = ROOT / ".wheel_cache" / f"{sys.version_info.major}.{sys.version_info.minor}"
wheel_cache.mkdir(parents=True, exist_ok=True)
cmd = [sys.executable, "-I", "-m", "pip", "download", "-d", str(wheel_cache)]
assert distribution.requires is not None
for req in distribution.requires:
requirement = Requirement(req)
if not requirement.extras: # pragma: no branch # we don't need to install any extras (tests/docs/etc)
cmd.append(req)
> check_call(cmd)
cmd = ['/Users/ssbarnea/code/os/tox/.tox/3.10/bin/python3', '-I', '-m', 'pip', 'download', '-d', ...]
dist_info = PosixPath('/private/var/folders/32/1xrphgzd4xv777syxjtkpdw80000gn/T/pytest-of-ssbarnea/pytest-105/info0/tox-4.23.4.dist-info')
distribution = <importlib.metadata.PathDistribution object at 0x1046ae8c0>
info = PosixPath('/private/var/folders/32/1xrphgzd4xv777syxjtkpdw80000gn/T/pytest-of-ssbarnea/pytest-105/info0')
req = "pytest>=8.3.3; extra == 'test'"
requirement = <Requirement('pytest>=8.3.3; extra == "test"')>
result = [PosixPath('/private/var/folders/32/1xrphgzd4xv777syxjtkpdw80000gn/T/pytest-of-ssbarnea/pytest-105/dist0/tox-4.23.4-py3-none-any.whl')]
tmp_path_factory = TempPathFactory(_given_basetemp=None, _trace=<pluggy._tracing.TagTracerSub object at 0x101e226b0>, _basetemp=PosixPath...rs/32/1xrphgzd4xv777syxjtkpdw80000gn/T/pytest-of-ssbarnea/pytest-105'), _retention_count=3, _retention_policy='failed')
tox_wheel = PosixPath('/private/var/folders/32/1xrphgzd4xv777syxjtkpdw80000gn/T/pytest-of-ssbarnea/pytest-105/dist0/tox-4.23.4-py3-none-any.whl')
wheel_cache = PosixPath('/Users/ssbarnea/code/os/tox/.wheel_cache/3.10')
zip_file = <zipfile.ZipFile [closed]>
tests/test_provision.py:96:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
popenargs = (['/Users/ssbarnea/code/os/tox/.tox/3.10/bin/python3', '-I', '-m', 'pip', 'download', '-d', ...],), kwargs = {}, retcode = 1
cmd = ['/Users/ssbarnea/code/os/tox/.tox/3.10/bin/python3', '-I', '-m', 'pip', 'download', '-d', ...]
def check_call(*popenargs, **kwargs):
"""Run command with arguments. Wait for command to complete. If
the exit code was zero then return, otherwise raise
CalledProcessError. The CalledProcessError object will have the
return code in the returncode attribute.
The arguments are the same as for the call function. Example:
check_call(["ls", "-l"])
"""
retcode = call(*popenargs, **kwargs)
if retcode:
cmd = kwargs.get("args")
if cmd is None:
cmd = popenargs[0]
> raise CalledProcessError(retcode, cmd)
E subprocess.CalledProcessError: Command '['/Users/ssbarnea/code/os/tox/.tox/3.10/bin/python3', '-I', '-m', 'pip', 'download', '-d', '/Users/ssbarnea/code/os/tox/.wheel_cache/3.10', 'cachetools>=5.5', 'chardet>=5.2', 'colorama>=0.4.6', 'filelock>=3.16.1', 'packaging>=24.2', 'platformdirs>=4.3.6', 'pluggy>=1.5', 'pyproject-api>=1.8', "tomli>=2.1; python_version < '3.11'", "typing-extensions>=4.12.2; python_version < '3.11'", 'virtualenv>=20.27.1', "devpi-process>=1.0.2; extra == 'test'", "pytest-mock>=3.14; extra == 'test'", "pytest>=8.3.3; extra == 'test'"]' returned non-zero exit status 1.
cmd = ['/Users/ssbarnea/code/os/tox/.tox/3.10/bin/python3', '-I', '-m', 'pip', 'download', '-d', ...]
kwargs = {}
popenargs = (['/Users/ssbarnea/code/os/tox/.tox/3.10/bin/python3', '-I', '-m', 'pip', 'download', '-d', ...],)
retcode = 1
/opt/homebrew/Cellar/python@3.10/3.10.16/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py:369: CalledProcessError
---------------------------------------------------------------- Captured stdout setup ----------------------------------------------------------------
done in 0.43799304217100143s acquire current tox wheel
done in 0.057830207981169224s acquire dependencies for current tox
---------------------------------------------------------------- Captured stderr setup ----------------------------------------------------------------
/Users/ssbarnea/code/os/tox/.tox/3.10/bin/python3: No module named pip
```
</details>
| open | 2024-12-10T17:31:01Z | 2025-01-21T19:22:31Z | https://github.com/tox-dev/tox/issues/3468 | [
"feature:new",
"help:wanted"
] | ssbarnea | 4 |
huggingface/diffusers | deep-learning | 10,467 | FLUX.1-dev FP8 Example Code: tmpxft_00000788_00000000-10_fp8_marlin.cudafe1.cpp | ### Describe the bug
Unable to run inference using Flux FP8
Logs
[FP8_logs.txt](https://github.com/user-attachments/files/18314458/FP8_logs.txt)
### Reproduction
https://huggingface.co/docs/diffusers/main/en/api/pipelines/flux#single-file-loading-for-the-fluxtransformer2dmodel
```python
import torch
from diffusers import FluxTransformer2DModel, FluxPipeline
from transformers import T5EncoderModel, CLIPTextModel
from optimum.quanto import freeze, qfloat8, quantize
bfl_repo = "black-forest-labs/FLUX.1-dev"
dtype = torch.bfloat16
transformer = FluxTransformer2DModel.from_single_file("https://huggingface.co/Kijai/flux-fp8/blob/main/flux1-dev-fp8.safetensors", torch_dtype=dtype)
quantize(transformer, weights=qfloat8)
freeze(transformer)
text_encoder_2 = T5EncoderModel.from_pretrained(bfl_repo, subfolder="text_encoder_2", torch_dtype=dtype)
quantize(text_encoder_2, weights=qfloat8)
freeze(text_encoder_2)
pipe = FluxPipeline.from_pretrained(bfl_repo, transformer=None, text_encoder_2=None, torch_dtype=dtype)
pipe.transformer = transformer
pipe.text_encoder_2 = text_encoder_2
pipe.enable_model_cpu_offload()
prompt = "A cat holding a sign that says hello world"
image = pipe(
    prompt,
    guidance_scale=3.5,
    output_type="pil",
    num_inference_steps=20,
    generator=torch.Generator("cpu").manual_seed(0)
).images[0]
image.save("flux-fp8-dev.png")
```
### Logs
```shell
Attached logs
```
### System Info
Windows 11
```
(venv) C:\ai1\diffuser_t2i>python --version
Python 3.10.11
(venv) C:\ai1\diffuser_t2i>echo %CUDA_PATH%
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.6
```
```
(venv) C:\ai1\diffuser_t2i>pip list
Package Version
------------------ ------------
accelerate 1.2.1
aiofiles 23.2.1
annotated-types 0.7.0
anyio 4.7.0
certifi 2024.12.14
charset-normalizer 3.4.1
click 8.1.8
colorama 0.4.6
diffusers 0.33.0.dev0
einops 0.8.0
exceptiongroup 1.2.2
fastapi 0.115.6
ffmpy 0.5.0
filelock 3.16.1
fsspec 2024.12.0
gguf 0.13.0
gradio 5.9.1
gradio_client 1.5.2
h11 0.14.0
httpcore 1.0.7
httpx 0.28.1
huggingface-hub 0.25.2
idna 3.10
imageio 2.36.1
imageio-ffmpeg 0.5.1
importlib_metadata 8.5.0
Jinja2 3.1.5
markdown-it-py 3.0.0
MarkupSafe 2.1.5
mdurl 0.1.2
mpmath 1.3.0
networkx 3.4.2
ninja 1.11.1.3
numpy 2.2.1
opencv-python 4.10.0.84
optimum-quanto 0.2.6
orjson 3.10.13
packaging 24.2
pandas 2.2.3
pillow 11.1.0
pip 23.0.1
protobuf 5.29.2
psutil 6.1.1
pydantic 2.10.4
pydantic_core 2.27.2
pydub 0.25.1
Pygments 2.18.0
python-dateutil 2.9.0.post0
python-multipart 0.0.20
pytz 2024.2
PyYAML 6.0.2
regex 2024.11.6
requests 2.32.3
rich 13.9.4
ruff 0.8.6
safehttpx 0.1.6
safetensors 0.5.0
semantic-version 2.10.0
sentencepiece 0.2.0
setuptools 65.5.0
shellingham 1.5.4
six 1.17.0
sniffio 1.3.1
starlette 0.41.3
sympy 1.13.1
tokenizers 0.21.0
tomlkit 0.13.2
torch 2.5.1+cu124
torchvision 0.20.1+cu124
tqdm 4.67.1
transformers 4.47.1
typer 0.15.1
typing_extensions 4.12.2
tzdata 2024.2
urllib3 2.3.0
uvicorn 0.34.0
websockets 14.1
wheel 0.45.1
zipp 3.21.0
```
### Who can help?
You tell me who will help me resolve this issue :) | closed | 2025-01-06T06:42:14Z | 2025-01-06T16:58:21Z | https://github.com/huggingface/diffusers/issues/10467 | [
"bug"
] | nitinmukesh | 4 |
geex-arts/django-jet | django | 349 | __init__() missing 1 required positional argument: 'sortable_by' | I found a bug.
In jet/utils.py line 223

==============
this is true.

| open | 2018-08-28T06:37:28Z | 2019-08-13T14:51:47Z | https://github.com/geex-arts/django-jet/issues/349 | [] | charloai | 16 |
stanfordnlp/stanza | nlp | 1,229 | Find POS of a sentence that is enclosed between a delimiter | I need to find the POS of certain words in a sentence.
For example: sentence = "For all models ~ITE+CKAD~ obtains the highest fluency of 1.68 and ~ITE+DD~ has the highest Knowledge Relevance of 0.56 and highest Context Coherence of 0.90"
From the above sentence, I need to find the POS of the words immediately before and after each word enclosed by the ~ delimiter. That is, I need to find the POS of the bolded words in the same sentence:
sentence = "For all **models** ~ITE+CKAD~ **obtains** the highest fluency of 1.68 **and** ~ITE+DD~ **has** the highest Knowledge Relevance of 0.56 and highest Context Coherence of 0.90"
The issue in finding the POS is the special characters inside the words: when I pass the sentence into the model, ~ITE+CKAD~ is separated into ITE, +, and CKAD, which makes it hard to track the POS of the adjacent words.
What I want is to make the model treat whitespace-separated chunks as single tokens, meaning the model should consider ~ITE+CKAD~ as one token.
The end goal is to find the pos before and after of the text that is enclosed by ^ delimiter.
Any other best approach is also appreciated
Note: there can be any special character then + like - or _ .
Thank you for your help in advance.
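If it helps: Stanza's tokenizer can be told to treat each whitespace-separated chunk as one token via `tokenize_pretokenized=True`, so `~ITE+CKAD~` survives as a single token; a small helper can then read off the neighbours' tags. A minimal sketch (the pipeline lines in the comment assume the English models are downloaded; the helper itself is plain Python, and `neighbor_pos` is a hypothetical name):

```python
# Hypothetical helper: `tagged` is a list of (token, POS) pairs, e.g. built
# from a Stanza doc with pretokenized input:
#   nlp = stanza.Pipeline("en", processors="tokenize,pos",
#                         tokenize_pretokenized=True)
#   tagged = [(w.text, w.upos) for w in nlp(sentence).sentences[0].words]
def neighbor_pos(tagged, delim="~"):
    """Return (prev_POS, token, next_POS) for every token wrapped in delim."""
    hits = []
    for i, (tok, _) in enumerate(tagged):
        if len(tok) > 1 and tok.startswith(delim) and tok.endswith(delim):
            prev_pos = tagged[i - 1][1] if i > 0 else None
            next_pos = tagged[i + 1][1] if i + 1 < len(tagged) else None
            hits.append((prev_pos, tok, next_pos))
    return hits

tagged = [("For", "ADP"), ("all", "DET"), ("models", "NOUN"),
          ("~ITE+CKAD~", "PROPN"), ("obtains", "VERB")]
print(neighbor_pos(tagged))  # [('NOUN', '~ITE+CKAD~', 'VERB')]
```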
| closed | 2023-03-30T06:35:51Z | 2023-04-28T00:19:30Z | https://github.com/stanfordnlp/stanza/issues/1229 | [
"question"
] | kushal-h | 2 |
ageitgey/face_recognition | python | 1,411 | How to not draw boxes around unknown faces | * face_recognition version: 1.3.0
* Python version: 3.9.7
* Operating System: Big Sur MacOs
### Description
I am using the KNN classifier contained inside the file named "face_recognition_knn.py"
Inside the "train" folder, I've included and trained only one person.
Inside the "test" folder, I have one image containing the trained subject among multiple people.
The computer draws a box around everyone's face, including unknown people.
How do I tell the computer not to draw a box around unknown people?
Sorry. Noob question here.
### What I Did
This is literally the exact same code in the original file "face_recognition_knn.py"
```
import math
from sklearn import neighbors
import os
import os.path
import pickle
from PIL import Image, ImageDraw
import face_recognition
from face_recognition.face_recognition_cli import image_files_in_folder
ALLOWED_EXTENSIONS = {'png', 'jpg', 'jpeg'}
def train(train_dir, model_save_path=None, n_neighbors=None, knn_algo='ball_tree', verbose=False):
"""
Trains a k-nearest neighbors classifier for face recognition.
:param train_dir: directory that contains a sub-directory for each known person, with its name.
(View in source code to see train_dir example tree structure)
Structure:
<train_dir>/
├── <person1>/
│ ├── <somename1>.jpeg
│ ├── <somename2>.jpeg
│ ├── ...
├── <person2>/
│ ├── <somename1>.jpeg
│ └── <somename2>.jpeg
└── ...
:param model_save_path: (optional) path to save model on disk
:param n_neighbors: (optional) number of neighbors to weigh in classification. Chosen automatically if not specified
    :param knn_algo: (optional) underlying data structure to support knn. Default is ball_tree
:param verbose: verbosity of training
:return: returns knn classifier that was trained on the given data.
"""
X = []
y = []
# Loop through each person in the training set
for class_dir in os.listdir(train_dir):
if not os.path.isdir(os.path.join(train_dir, class_dir)):
continue
# Loop through each training image for the current person
for img_path in image_files_in_folder(os.path.join(train_dir, class_dir)):
image = face_recognition.load_image_file(img_path)
face_bounding_boxes = face_recognition.face_locations(image)
if len(face_bounding_boxes) != 1:
# If there are no people (or too many people) in a training image, skip the image.
if verbose:
print("Image {} not suitable for training: {}".format(img_path, "Didn't find a face" if len(face_bounding_boxes) < 1 else "Found more than one face"))
else:
# Add face encoding for current image to the training set
X.append(face_recognition.face_encodings(image, known_face_locations=face_bounding_boxes)[0])
y.append(class_dir)
# Determine how many neighbors to use for weighting in the KNN classifier
if n_neighbors is None:
n_neighbors = int(round(math.sqrt(len(X))))
if verbose:
print("Chose n_neighbors automatically:", n_neighbors)
# Create and train the KNN classifier
knn_clf = neighbors.KNeighborsClassifier(n_neighbors=n_neighbors, algorithm=knn_algo, weights='distance')
knn_clf.fit(X, y)
# Save the trained KNN classifier
if model_save_path is not None:
with open(model_save_path, 'wb') as f:
pickle.dump(knn_clf, f)
return knn_clf
def predict(X_img_path, knn_clf=None, model_path=None, distance_threshold=0.6):
"""
Recognizes faces in given image using a trained KNN classifier
:param X_img_path: path to image to be recognized
:param knn_clf: (optional) a knn classifier object. if not specified, model_save_path must be specified.
:param model_path: (optional) path to a pickled knn classifier. if not specified, model_save_path must be knn_clf.
:param distance_threshold: (optional) distance threshold for face classification. the larger it is, the more chance
of mis-classifying an unknown person as a known one.
:return: a list of names and face locations for the recognized faces in the image: [(name, bounding box), ...].
For faces of unrecognized persons, the name 'unknown' will be returned.
"""
if not os.path.isfile(X_img_path) or os.path.splitext(X_img_path)[1][1:] not in ALLOWED_EXTENSIONS:
raise Exception("Invalid image path: {}".format(X_img_path))
if knn_clf is None and model_path is None:
        raise Exception("Must supply knn classifier either through knn_clf or model_path")
# Load a trained KNN model (if one was passed in)
if knn_clf is None:
with open(model_path, 'rb') as f:
knn_clf = pickle.load(f)
# Load image file and find face locations
X_img = face_recognition.load_image_file(X_img_path)
X_face_locations = face_recognition.face_locations(X_img)
# If no faces are found in the image, return an empty result.
if len(X_face_locations) == 0:
return []
    # Find encodings for faces in the test image
faces_encodings = face_recognition.face_encodings(X_img, known_face_locations=X_face_locations)
# Use the KNN model to find the best matches for the test face
closest_distances = knn_clf.kneighbors(faces_encodings, n_neighbors=1)
are_matches = [closest_distances[0][i][0] <= distance_threshold for i in range(len(X_face_locations))]
# Predict classes and remove classifications that aren't within the threshold
return [(pred, loc) if rec else ("unknown", loc) for pred, loc, rec in zip(knn_clf.predict(faces_encodings), X_face_locations, are_matches)]
def show_prediction_labels_on_image(img_path, predictions):
"""
Shows the face recognition results visually.
:param img_path: path to image to be recognized
:param predictions: results of the predict function
:return:
"""
pil_image = Image.open(img_path).convert("RGB")
draw = ImageDraw.Draw(pil_image)
for name, (top, right, bottom, left) in predictions:
# Draw a box around the face using the Pillow module
draw.rectangle(((left, top), (right, bottom)), outline=(0, 0, 255))
# There's a bug in Pillow where it blows up with non-UTF-8 text
# when using the default bitmap font
name = name.encode("UTF-8")
# Draw a label with a name below the face
text_width, text_height = draw.textsize(name)
draw.rectangle(((left, bottom - text_height - 10), (right, bottom)), fill=(0, 0, 255), outline=(0, 0, 255))
draw.text((left + 6, bottom - text_height - 5), name, fill=(255, 255, 255, 255))
# Remove the drawing library from memory as per the Pillow docs
del draw
# Display the resulting image
pil_image.show()
if __name__ == "__main__":
# STEP 1: Train the KNN classifier and save it to disk
# Once the model is trained and saved, you can skip this step next time.
print("Training KNN classifier...")
classifier = train("knn_examples/train", model_save_path="trained_knn_model.clf", n_neighbors=2)
print("Training complete!")
# STEP 2: Using the trained classifier, make predictions for unknown images
for image_file in os.listdir("knn_examples/test"):
full_file_path = os.path.join("knn_examples/test", image_file)
print("Looking for faces in {}".format(image_file))
# Find all people in the image using a trained classifier model
# Note: You can pass in either a classifier file name or a classifier model instance
predictions = predict(full_file_path, model_path="trained_knn_model.clf")
# Print results on the console
for name, (top, right, bottom, left) in predictions:
print("- Found {} at ({}, {})".format(name, left, top))
# Display results overlaid on an image
show_prediction_labels_on_image(os.path.join("knn_examples/test", image_file), predictions)
```
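Since `predict()` in the script above already returns the name `"unknown"` for unrecognized faces, one way to avoid drawing their boxes is to filter the prediction list before calling `show_prediction_labels_on_image`. A small sketch (the `drop_unknown` helper is mine, not part of the original example):

```python
def drop_unknown(predictions):
    """Keep only faces the KNN classifier matched to a trained person."""
    return [(name, loc) for name, loc in predictions if name != "unknown"]

# e.g. show_prediction_labels_on_image(full_file_path, drop_unknown(predictions))
preds = [("obama", (10, 50, 60, 20)), ("unknown", (5, 45, 55, 15))]
print(drop_unknown(preds))  # [('obama', (10, 50, 60, 20))]
```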
| closed | 2022-02-16T19:09:19Z | 2022-02-17T18:50:53Z | https://github.com/ageitgey/face_recognition/issues/1411 | [] | peoplecure | 1 |
betodealmeida/shillelagh | sqlalchemy | 190 | Unable to load adapter gsheetsapi | Thank you for this very useful library. I have a simple streamlit python app in which shillelagh fails because gsheetsapi adapter is not getting loaded. There is probably some similarity to a past issue https://github.com/betodealmeida/shillelagh/issues/112 so I set the version to 1.0.1, but without success.
requirements.txt
```
streamlit
shillelagh[gsheetaspi]==1.0.1
pandas
```
Error message:
```
File "/app/gsheetdb_tutorial/math_db_ui.py", line 52, in establish_connection
"gsheetsapi": {"service_account_file": service_account_file}})
File "/home/appuser/venv/lib/python3.7/site-packages/shillelagh/backends/apsw/db.py", line 517, in connect
adapter_kwargs = {mapping[k]: v for k, v in adapter_kwargs.items()}
File "/home/appuser/venv/lib/python3.7/site-packages/shillelagh/backends/apsw/db.py", line 517, in <dictcomp>
adapter_kwargs = {mapping[k]: v for k, v in adapter_kwargs.items()}
KeyError: 'gsheetsapi'
```
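The `KeyError` in the traceback comes from the dict comprehension in `db.py` line 517, which remaps each key of `adapter_kwargs` through the mapping of successfully loaded adapters; if no `gsheetsapi` adapter was registered (note, too, that the extra in the requirements above is spelled `gsheetaspi` rather than `gsheetsapi`, which would stop the adapter's dependencies from installing), the lookup fails. A stripped-down sketch of just that mechanism (plain Python, not shillelagh itself):

```python
# `mapping` is built from the adapters shillelagh managed to load;
# if gsheetsapi failed to load, it simply isn't in the mapping.
mapping = {}  # no adapters loaded
adapter_kwargs = {"gsheetsapi": {"service_account_file": "creds.json"}}

try:
    remapped = {mapping[k]: v for k, v in adapter_kwargs.items()}
except KeyError as exc:
    print("KeyError:", exc)  # KeyError: 'gsheetsapi'
```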
Can you please help? | closed | 2022-03-06T10:58:03Z | 2022-03-08T06:05:43Z | https://github.com/betodealmeida/shillelagh/issues/190 | [] | code-anjali | 4 |
pytest-dev/pytest-qt | pytest | 487 | Access violation in `waitSignal` with temporary object | This test causes an access violation.
```python
from PySide6.QtCore import QObject, Signal
class Signaller(QObject):
    signal = Signal()

def test_should_pass_but_gives_access_violation(qtbot):
    qtbot.waitSignal(Signaller().signal, timeout=1)
```
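One plausible explanation (an assumption about object lifetime, not verified against the Qt internals): `Signaller()` is a temporary that nothing references once the call begins, so Python may destroy it while `waitSignal` is still waiting on its signal, leaving the C++ side pointing at a dead object. Binding it to a name first, e.g. `signaller = Signaller()` and then `qtbot.waitSignal(signaller.signal, timeout=1)`, keeps it alive. The lifetime issue itself can be sketched without Qt:

```python
import gc
import weakref

class Signaller:  # plain-Python stand-in for the QObject
    pass

# Temporary: nothing holds a reference once the expression finishes,
# so the instance is collected immediately (CPython refcounting).
dead = weakref.ref(Signaller())
gc.collect()
print(dead())  # None: the underlying object has already vanished

# Keeping a reference, as the test should with the real QObject:
signaller = Signaller()
alive = weakref.ref(signaller)
gc.collect()
print(alive() is signaller)  # True
```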
Windows 10 21H2
pytest==7.3.1
pytest-qt==4.2.0
PySide6-Essentials==6.5.0 | closed | 2023-05-14T17:17:37Z | 2023-05-16T20:15:50Z | https://github.com/pytest-dev/pytest-qt/issues/487 | [] | bersbersbers | 15 |
streamlit/streamlit | deep-learning | 10,756 | Custom components unmount and reload on switch_page | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [x] I added a very descriptive title to this issue.
- [x] I have provided sufficient information below to help reproduce this issue.
### Summary
Hi! I have found that when I change pages with `st.switch_page`, if I have a custom component loaded, it is unmounted and remounted, which makes the component glitch. As a result, the state of my component ends up inconsistent.
I have trouble keeping the component's state aligned with the session. I would like to know whether a better implementation is planned: a more native use of custom components in Streamlit that mounts the React component only once per user session.
I have an example with my custom navigation component in my repo. If you run the demo with the "Use native way" checkbox checked, you can see the glitched unmount:
https://github.com/quiradev/streamlit-plugins/tree/main/examples/components/navbar/native_streamlit_multipage
Thanks in advance.
### Reproducible Code Example
```Python
import sys
import streamlit as st
st.set_page_config(layout="wide")
if "logged_in" not in st.session_state:
st.session_state.logged_in = True
USER = "admin"
PASSWORD = "admin"
positions = ["top", "under", "side"]
def my_sidebar():
with st.sidebar:
st.write("Logged in:", st.session_state.logged_in)
position_mode = st.radio(
"Navbar position mode",
positions,
index=positions.index(st.session_state.get("position_mode", "side")),
key="position_mode_input",
)
sticky_nav = st.checkbox(
"Sticky navbar",
value=st.session_state.get("sticky_nav", True),
key="sticky_nav_input"
)
native_way = st.checkbox(
"Use native way",
value=st.session_state.get("native_way", True),
key="native_way_input"
)
st.session_state["position_mode"] = position_mode
st.session_state["sticky_nav"] = sticky_nav
st.session_state["native_way"] = native_way
def my_heading():
st.title("Streamlit Multi-Page App")
st.subheader("This is a multi-page app with a native Streamlit navbar.")
st.markdown("> But only vizualize well with navbar on `top` position")
def login():
_, col, _ = st.columns([2, 6, 2])
with col:
with st.form(key="login_form"):
user = st.text_input("Username")
password = st.text_input("Password", type="password")
submitted = st.form_submit_button("Submit")
with st.expander("Psst! Here's the login info"):
st.write(f"Username and Password is:")
st.markdown(f"""
```bash
{USER}
```
""")
if submitted:
if user == USER and password == PASSWORD:
st.session_state.logged_in = True
st_switch_home()
else:
st.toast("Invalid username or password", icon="❌")
def account():
st.write("Account page")
st.caption("This is a protected page. Only logged in users can view this.")
def settings():
st.button("Theme")
def logout():
st.session_state.logged_in = False
st.session_state.app_id = None
st.session_state.active_app_id = None
st.rerun()
st.logo(
image="https://streamlit.io/images/brand/streamlit-logo-primary-colormark-darktext.svg",
icon_image="https://streamlit.io/images/brand/streamlit-mark-color.png"
)
dashboard = st.Page("dashboard.py", title="Dashboard", icon=":material/dashboard:", default=True, url_path="dashboard")
login_page = st.Page(login, title="Log in", icon=":material/login:", url_path="login")
account_page = st.Page(account, title="Account", icon=":material/account_circle:", url_path="account")
settings_page = st.Page(settings, title="Settings", icon=":material/settings:", url_path="settings")
bugs = st.Page("reports/bugs.py", title="Bug reports", icon=":material/bug_report:", url_path="bugs")
bugs2 = st.Page("reports/bugs.py", title="Bug reports2", icon=":material/bug_report:", url_path="bugs2")
bugs3 = st.Page("reports/bugs.py", title="Bug reports3", icon=":material/bug_report:", url_path="bugs3")
alerts = st.Page("reports/alerts.py", title="System alerts", icon=":material/notification_important:", url_path="alerts")
search = st.Page("tools/search.py", title="Search", icon=":material/search:", url_path="search")
history = st.Page("tools/history.py", title="History", icon=":material/history:", url_path="history")
logout_page = st.Page(logout, title="Log out", icon=":material/logout:", url_path="logout")
# HERE IS THE CHANGE
from streamlit_plugins.components.navbar import st_navbar, build_menu_from_st_pages, NavbarPositionType, st_navigation, st_switch_home
my_sidebar()
position_mode: NavbarPositionType = st.session_state.get("position_mode", "top")
sticky_nav = st.session_state.get("sticky_nav", True)
native_way = st.session_state.get("native_way", False)
if st.session_state.logged_in:
if position_mode == "top":
my_heading()
page = st_navigation(
{
"": [dashboard],
"Reports": [alerts],
"Tools": [search, history, bugs, bugs2, bugs3]
},
section_info={
"Reports": {"icon": ":material/assessment:"},
"Tools": {"icon": ":material/extension:"}
},
position_mode=position_mode if st.session_state.logged_in else "hidden", sticky_nav=sticky_nav,
login_page=login_page, logout_page=logout_page,
account_page=account_page,
settings_page=settings_page,
native_way=native_way
)
if st.session_state.logged_in:
# SOME TEXT ABOVE THE NAVBAR
page.run()
else:
login_page._can_be_called = True
    login_page.run()
```
### Steps To Reproduce
Use a custom component with st.navigation. When changing pages, the custom component is removed and remounted, while other Streamlit components that are common to multiple pages are preserved (their HTML does not change); only the custom components are removed.
### Expected Behavior
Keep the custom components mounted; do not remove or unmount them.
### Current Behavior
_No response_
### Is this a regression?
- [ ] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: 1.42.0
- Python version: 3.11
- Operating System: Windows
- Browser: Chrome
### Additional Information
 | open | 2025-03-12T20:31:14Z | 2025-03-18T20:30:58Z | https://github.com/streamlit/streamlit/issues/10756 | [
"type:enhancement",
"status:awaiting-user-response"
] | vquilon | 4 |
FactoryBoy/factory_boy | django | 462 | The docs mention a ``_next_sequence`` attribute which doesn't exist | There is no such property as `_next_sequence` (http://factoryboy.readthedocs.io/en/latest/reference.html#factory.Factory.reset_sequence)
```
type object 'UserFactory' has no attribute '_next_sequence'
```
Am I missing something here?
EDIT: Ignore my original issue, however, the second part of it (this property missing) still stands | closed | 2018-03-21T07:27:08Z | 2018-10-15T05:42:18Z | https://github.com/FactoryBoy/factory_boy/issues/462 | [
"Doc"
] | fgblomqvist | 2 |
QingdaoU/OnlineJudge | django | 38 | Is there a feature to delete groups? | closed | 2016-05-10T07:01:23Z | 2016-05-10T11:37:56Z | https://github.com/QingdaoU/OnlineJudge/issues/38 | [] | Ir1d | 1 |
polakowo/vectorbt | data-visualization | 189 | ReferenceError: underlying object has vanished | Hi,
I am having this weird problem where I get this error the first time I try to run vbt, but it works on the second attempt:
`pf = vbt.Portfolio.from_signals(df.close, df.entries, df.exits)`
Error:
```
Traceback (most recent call last):
File "C:\Users\john\AppData\Roaming\Python\Python37\site-packages\numba\core\caching.py", line 479, in save
data_name = overloads[key]
KeyError: ((UniTuple(int64 x 2), array(float64, 2d, C), array(int32, 1d, C), readonly array(float64, 1d, C), array(int32, 2d, C), array(float64, 1d, C), array(float64, 1d, C), array(float64, 0d, C), array(float64, 0d, C), array(int32, 0d, C), array(int32, 0d, C), array(float64, 0d, C), array(int32, 0d, C), array(float64, 0d, C), array(float64, 0d, C), array(float64, 0d, C), array(float64, 0d, C), array(bool, 0d, C), array(bool, 0d, C), array(bool, 0d, C), array(bool, 0d, C), array(bool, 0d, C), array(int32, 0d, C), array(bool, 0d, C), array(float64, 0d, C), array(float64, 0d, C), array(float64, 0d, C), array(float64, 0d, C), array(float64, 0d, C), array(bool, 0d, C), array(float64, 0d, C), array(int32, 0d, C), array(int32, 0d, C), array(int32, 0d, C), array(int32, 0d, C), array(int32, 0d, C), type(CPUDispatcher(<function no_adjust_sl_func_nb at 0x000001D06ECAEE58>)), Tuple(), type(CPUDispatcher(<function no_adjust_tp_func_nb at 0x000001D06ECC7708>)), Tuple(), bool, bool, bool, bool, int64, int64, bool), ('x86_64-pc-windows-msvc', 'skylake', '+64bit,+adx,+aes,+avx,+avx2,-avx512bf16,-avx512bitalg,-avx512bw,-avx512cd,-avx512dq,-avx512er,-avx512f,-avx512ifma,-avx512pf,-avx512vbmi,-avx512vbmi2,-avx512vl,-avx512vnni,-avx512vpopcntdq,+bmi,+bmi2,-cldemote,+clflushopt,-clwb,-clzero,+cmov,+cx16,+cx8,-enqcmd,+f16c,+fma,-fma4,+fsgsbase,+fxsr,-gfni,+invpcid,-lwp,+lzcnt,+mmx,+movbe,-movdir64b,-movdiri,-mwaitx,+pclmul,-pconfig,-pku,+popcnt,-prefetchwt1,+prfchw,-ptwrite,-rdpid,+rdrnd,+rdseed,+rtm,+sahf,-sgx,-sha,-shstk,+sse,+sse2,+sse3,+sse4.1,+sse4.2,-sse4a,+ssse3,-tbm,-vaes,-vpclmulqdq,-waitpkg,-wbnoinvd,-xop,+xsave,+xsavec,+xsaveopt,+xsaves'), ('0f0f2548aca57509e5e0bbe5e6f34487985e9f37bae572ebbe794f9d6f5c3bbe', 'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "D:\Anaconda3\envs\edge\lib\site-packages\IPython\core\interactiveshell.py", line 3441, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-3-a3d49dcdedf1>", line 1, in <module>
pf = vbt.Portfolio.from_signals(df.close, df.entries, df.exits) # Backtest in sample entries and exits
File "D:\Anaconda3\envs\edge\lib\site-packages\vectorbt\portfolio\base.py", line 1295, in from_signals
close.ndim == 2
File "C:\Users\john\AppData\Roaming\Python\Python37\site-packages\numba\core\dispatcher.py", line 439, in _compile_for_args
raise e
File "C:\Users\john\AppData\Roaming\Python\Python37\site-packages\numba\core\dispatcher.py", line 372, in _compile_for_args
return_val = self.compile(tuple(argtypes))
File "C:\Users\john\AppData\Roaming\Python\Python37\site-packages\numba\core\dispatcher.py", line 915, in compile
self._cache.save_overload(sig, cres)
File "C:\Users\john\AppData\Roaming\Python\Python37\site-packages\numba\core\caching.py", line 661, in save_overload
self._save_overload(sig, data)
File "C:\Users\john\AppData\Roaming\Python\Python37\site-packages\numba\core\caching.py", line 671, in _save_overload
self._cache_file.save(key, data)
File "C:\Users\john\AppData\Roaming\Python\Python37\site-packages\numba\core\caching.py", line 488, in save
self._save_index(overloads)
File "C:\Users\john\AppData\Roaming\Python\Python37\site-packages\numba\core\caching.py", line 532, in _save_index
data = self._dump(data)
File "C:\Users\john\AppData\Roaming\Python\Python37\site-packages\numba\core\caching.py", line 560, in _dump
return pickle.dumps(obj, protocol=-1)
File "C:\Users\john\AppData\Roaming\Python\Python37\site-packages\numba\core\types\functions.py", line 473, in __getnewargs__
raise ReferenceError("underlying object has vanished")
ReferenceError: underlying object has vanished
```
Running it again, it works?
| closed | 2021-07-09T15:45:27Z | 2021-07-09T16:11:34Z | https://github.com/polakowo/vectorbt/issues/189 | [] | jmrichardson | 2 |
iterative/dvc | data-science | 9,915 | `exp run --allow-missing`: runs stage unexpectedly | # Bug Report
## Description
`dvc exp run --allow-missing` runs stages even though they are unchanged other than missing files.
### Reproduce
```console
$ git clone git@github.com:iterative/example-get-started.git
$ cd example-get-started
$ dvc exp run --allow-missing -S train.min_split=0.005
Reproducing experiment 'plump-leak'
'data/data.xml.dvc' didn't change, skipping
Stage 'prepare' didn't change, skipping
WARNING: 'data/prepared' is empty.
WARNING: 'data/prepared' is empty.
Running stage 'featurize':
> python src/featurization.py data/prepared data/features
Traceback (most recent call last):
File "/private/tmp/example-get-started/src/featurization.py", line 136, in <module>
main()
File "/private/tmp/example-get-started/src/featurization.py", line 120, in main
generate_and_save_train_features(
File "/private/tmp/example-get-started/src/featurization.py", line 58, in generate_and_save_train_features
df_train = get_df(train_input)
^^^^^^^^^^^^^^^^^^^
File "/private/tmp/example-get-started/src/featurization.py", line 14, in get_df
df = pd.read_csv(
^^^^^^^^^^^^
File "/Users/dave/micromamba/envs/example-get-started/lib/python3.11/site-packages/pandas/io/parsers/readers.py", line 912, in read_csv
return _read(filepath_or_buffer, kwds)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dave/micromamba/envs/example-get-started/lib/python3.11/site-packages/pandas/io/parsers/readers.py", line 577, in _read
parser = TextFileReader(filepath_or_buffer, **kwds)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dave/micromamba/envs/example-get-started/lib/python3.11/site-packages/pandas/io/parsers/readers.py", line 1407, in __init__
self._engine = self._make_engine(f, self.engine)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dave/micromamba/envs/example-get-started/lib/python3.11/site-packages/pandas/io/parsers/readers.py", line 1661, in _make_engine
self.handles = get_handle(
^^^^^^^^^^^
File "/Users/dave/micromamba/envs/example-get-started/lib/python3.11/site-packages/pandas/io/common.py", line 868, in get_handle
handle = open(handle, ioargs.mode)
^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: 'data/prepared/train.tsv'
ERROR: failed to reproduce 'featurize': failed to run: python src/featurization.py data/prepared data/features, exited with 1
```
### Expected
DVC should be skipping `featurize` since nothing changed in this stage.
DVC generates empty `data/prepared` and `data/features` dirs, which might be part of the problem?
### Environment information
**Output of `dvc doctor`:**
```console
$ dvc doctor
DVC version: 3.17.1.dev6+ge2aedb7e9
-----------------------------------
Platform: Python 3.11.4 on macOS-13.5-arm64-arm-64bit
Subprojects:
dvc_data = 2.8.1
dvc_objects = 0.24.1
dvc_render = 0.5.3
dvc_task = 0.3.0
scmrepo = 1.0.4
Supports:
azure (adlfs = 2023.4.0, knack = 0.11.0, azure-identity = 1.13.0),
gdrive (pydrive2 = 1.16.1),
gs (gcsfs = 2023.6.0),
hdfs (fsspec = 2023.6.0, pyarrow = 12.0.1),
http (aiohttp = 3.8.5, aiohttp-retry = 2.8.3),
https (aiohttp = 3.8.5, aiohttp-retry = 2.8.3),
oss (ossfs = 2021.8.0),
s3 (s3fs = 2023.6.0, boto3 = 1.26.161),
ssh (sshfs = 2023.7.0),
webdav (webdav4 = 0.9.8),
webdavs (webdav4 = 0.9.8),
webhdfs (fsspec = 2023.6.0)
Config:
Global: /Users/dave/Library/Application Support/dvc
System: /Library/Application Support/dvc
Cache types: <https://error.dvc.org/no-dvc-cache>
Caches: local
Remotes: https
Workspace directory: apfs on /dev/disk3s1s1
Repo: dvc, git
Repo.site_cache_dir: /Library/Caches/dvc/repo/d009e3e973ba4fa60c5080b78a58592a
``` | closed | 2023-09-05T17:53:44Z | 2023-09-07T14:02:00Z | https://github.com/iterative/dvc/issues/9915 | [
"p1-important",
"A: experiments"
] | dberenbaum | 2 |
FlareSolverr/FlareSolverr | api | 384 | [yggtorrent] (testing) Exception (yggtorrent): [...] Only Chrome at revision rlatest is guaranteed to work. | ### Environment
* **FlareSolverr version**: 2.2.4
* **Last working FlareSolverr version**: 2.2.4
* **Operating system**: CentOS 7
* **Are you using Docker**: yes
* **FlareSolverr User-Agent (see log traces or / endpoint)**: Mozilla/5.0 (X11; Linux x86_64; rv:94.0) Gecko/20100101 Firefox/94.0
* **Are you using a proxy or VPN?** no
* **Are you using Captcha Solver:** no
* **If using captcha solver, which one:**
* **URL to test this issue:** https://www5.yggtorrent.la/
### Description
Installed FlareSolverr from Docker
Installed Jackett from Docker
Configured Jackett to use FlareSolverr
Configured Jackett to use ygg using the user/password auth
Worked a week ago
Now, when clicking the test button, I get the error message
### Logged Error Messages
> 2022-04-25T14:14:52+02:00 INFO REQ-203 Cloudflare detected
> 2022-04-25T14:15:23+02:00 INFO REQ-203 Challenge solved
> 2022-04-25T14:16:02+02:00 INFO REQ-204 Incoming request => POST /v1 body: {"maxTimeout":55000,"cmd":"request.get","url":"https://www5.yggtorrent.la/engine/search?category=2140&name=&description=&file=&uploader=&sub_category=&do=search&order=desc&sort=publish_date"}
> 2022-04-25T14:16:08+02:00 WARN REQ-204 Page not reloaded (do not report!): Cause: TimeoutError: Navigation timeout of 18333.333333333332 ms exceeded
> 2022-04-25T14:16:33+02:00 ERROR REQ-204 Unexpected error: ProtocolError: Protocol error (Runtime.callFunctionOn): Target closed.
> 2022-04-25T14:16:44+02:00 ERROR REQ-204 TimeoutError: Timed out after 40000 ms while trying to connect to the browser! Only Chrome at revision rlatest is guaranteed to work.
> 2022-04-25T14:16:44+02:00 INFO REQ-204 Response in 42.246 s
> 2022-04-25T14:16:45+02:00 ERROR REQ-204 Error: Unable to process browser request. Error: Maximum timeout reached. maxTimeout=55000 (ms)
> 2022-04-25T14:16:45+02:00 INFO REQ-204 Response in 152.614 s
> 2022-04-25T16:01:38+02:00 WARN REQ-213 Page not reloaded (do not report!): Cause: Error: Navigation failed because browser has disconnected!
> 2022-04-25T16:01:38+02:00 ERROR REQ-213 Unexpected error: Error: Protocol error (Network.getCookies): Session closed. Most likely the page has been closed.
### Screenshots

| closed | 2022-04-25T12:24:02Z | 2022-07-30T22:10:28Z | https://github.com/FlareSolverr/FlareSolverr/issues/384 | [] | vipera7 | 3 |
scikit-optimize/scikit-optimize | scikit-learn | 626 | Algorithm may not converge? | I'm using the package to do hyper parameter optimisation of a 3-layer neural network. When calling
> res_gp = gp_minimize(train_bayes, space, n_calls=200, n_random_starts=20, acq_func='EIps',
> n_jobs=-1, verbose=True, random_state=random_seed)
and plotting the res_gp object, the following plot suggests that the optimisation is indeed working

but then this plot raises some eyebrows and calls this into question:

If I am not mistaken (see also issue #624 ), lighter colours correspond to minima, yet the algorithm seems to try to stay away from minimal regions? Is this a bug? | open | 2018-02-01T11:01:59Z | 2019-08-27T02:42:53Z | https://github.com/scikit-optimize/scikit-optimize/issues/626 | [] | bstemper | 16 |
nteract/papermill | jupyter | 482 | Papermill adding unusual syntax to HTML stored in JSON | Hi Friends.
I've been using papermill for a while to execute jupyter notebooks in a CI framework. Thank you for your work on it as it's been a fantastic tool.
I'm running into an odd (I think?) bug. When I run a notebook with an interactive element that requires an iframe - case in point, creating a map using Leaflet (folium for Python) - it seems to be adding some code that makes the output render incorrectly.
I just ran one of my notebooks. Prior to running papermill, the iframe embed looked like this:
The important piece here is that the `src=data:text/html` part is correctly populated in the json.
```
"<div style=\"width:100%;\"><div style=\"position:relative;width:100%;height:0;padding-bottom:60%;\"><iframe src=\"data:text/html;charset=utf-8;base64,
```
When I run papermill and create a new notebook, for some reason I get the following below; notice that the src now begins with about:blank.
```
"<div style=\"width:100%;\"><div style=\"position:relative;width:100%;height:0;padding-bottom:60%;\"><iframe src=\"about:blank\" style=\"position:absolute;width:100%;height:100%;left:0;top:0;border:none !important;\" data-html=PCFET0NUWVBFIGh0bWw+C
```
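A side observation (my reading of the two snippets, not a papermill internal): the HTML is not lost; it has moved from the iframe's `src` into the `data-html` attribute, still base64-encoded. The visible prefix of that attribute decodes to the start of an HTML document:

```python
import base64

# "PCFET0NUWVBF" is the leading chunk of the data-html value shown above.
print(base64.b64decode("PCFET0NUWVBF").decode("ascii"))  # <!DOCTYPE
```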
Can anyone shed some light on why this is happening? It's happening with all of the notebooks that I run through papermill that use folium and have an iframe embed. I just updated to version 2.0 from 1.2 as a sanity check and it's still doing the same thing.
Here is an example of a rendered page where you can see the folium map iframes don't render properly. But I have traced this error back to running papermill, as the JSON looks fine prior to running it. Any suggestions for fixing this are much appreciated!
https://www.earthdatascience.org/courses/scientists-guide-to-plotting-data-in-python/plot-spatial-data/customize-raster-plots/interactive-maps/
Oh, I am running papermill on a Mac with Python as follows (I am adding this just in case I'm missing a parameter here):
`pm.execute_notebook(notebook, out_notebook)`
but our CI build is linux and the behavior is the same there.
many thanks | closed | 2020-03-23T23:44:20Z | 2020-04-20T16:06:51Z | https://github.com/nteract/papermill/issues/482 | [] | lwasser | 4 |
tox-dev/tox | automation | 2,524 | Dependency with "vulnerable" version of py | Hi all,
I couldn't find this reported yet (apologies if it's a duplicate), but tox has a dependency on `py`, which is currently flagged as a vulnerability: https://nvd.nist.gov/vuln/detail/CVE-2022-42969, and is therefore reported by tools like `safety` and `pip-audit`.
There is a lot of chatter [here](https://github.com/pytest-dev/py/issues/287) about whether this should be considered a vulnerability in the first place and whether the advisory should be taken down. It doesn't sound like the `py` maintainers are going to _fix_ the affected code; instead, they removed the dependency from `pytest` altogether by [vendoring](https://github.com/pytest-dev/pytest/pull/10396) the code they still needed.
Is this something that could be done in `tox` as well?
Thanks in advance!
| closed | 2022-11-01T18:10:39Z | 2022-11-01T18:28:09Z | https://github.com/tox-dev/tox/issues/2524 | [
"bug:normal"
] | juanitosvq | 3 |
stanfordnlp/stanza | nlp | 690 | evaluation of trained models | Hello! Thanks so much for your amazing library. I have trained all the processors on my Persian data based on the Model Training and Evaluation documentation. I would like to run the trained model on some test data and evaluate them. I also need to save the prediction files to manually go through them to check some specific cases. On this documentation, it says that we can use ```bash scripts/run_ete.sh ${corpus} ${split}``` to evaluate the full parsing pipeline. However, it seems that there is no such file as ```run_ete.sh``` in this repository. I was wondering if there is any update available on this part of the documentation. I very much appreciate your help. | closed | 2021-05-05T20:59:21Z | 2021-06-17T17:19:51Z | https://github.com/stanfordnlp/stanza/issues/690 | [
"question"
] | royakabiri | 4 |
sqlalchemy/sqlalchemy | sqlalchemy | 10,850 | [MySQL] Nullability of generated columns is not inspected correctly | ### Describe the bug
The inspector does not pick up the nullability of generated columns for MySQL.
### Optional link from https://docs.sqlalchemy.org which documents the behavior that is expected
_No response_
### SQLAlchemy Version in Use
2.0.22
### DBAPI (i.e. the database driver)
MySQL
### Database Vendor and Major Version
MySQL 8
### Python Version
Python 3.11
### Operating system
Linux
### To Reproduce
```python
import os
from sqlalchemy import sql, create_engine, inspect
# Create a table with a generated column in MySQL
engine = create_engine(os.environ["DB_URL"])
conn = engine.connect()
ddl = """CREATE TABLE IF NOT EXISTS mytable (
my_generated_column int GENERATED ALWAYS AS (1234) VIRTUAL NOT NULL
);"""
conn.execute(sql.text(ddl))
# Try inspecting the generated column
inspector = inspect(conn)
cols = inspector.get_columns("mytable")
cols = [col for col in cols if col["name"] == "my_generated_column"]
assert cols[0]["nullable"] == False
```
### Error
```
Traceback (most recent call last):
File "reproduction.py", line 19, in <module>
assert cols[0]["nullable"] == False
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError
```
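For anyone triaging: the server-side metadata does expose the constraint, so a direct `information_schema` query is a useful cross-check against what the inspector returns (a sketch; the expected values are what MySQL 8 should report for the table above):

```sql
-- sanity check outside the inspector: ask MySQL directly
SELECT COLUMN_NAME, IS_NULLABLE, EXTRA
FROM information_schema.COLUMNS
WHERE TABLE_NAME = 'mytable'
  AND COLUMN_NAME = 'my_generated_column';
-- expected on MySQL 8: IS_NULLABLE = 'NO', EXTRA = 'VIRTUAL GENERATED'
```

If this reports `NO` while `get_columns` reports `nullable=True`, the gap is in how the dialect derives nullability for generated columns, not in the server metadata.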
### Additional context
_No response_ | closed | 2024-01-09T13:27:00Z | 2024-01-23T18:17:51Z | https://github.com/sqlalchemy/sqlalchemy/issues/10850 | [
"bug",
"mysql",
"reflection",
"PRs (with tests!) welcome"
] | GeorchW | 5 |
KaiyangZhou/deep-person-reid | computer-vision | 118 | What is the expected output in the results folder when visualizing the ranks? | When visualizing the ranks I got a probe folder and 20 gallery folders; there is more than one person identity in both the gallery and probe folders.
My question is: how can I make sure the results are correct? Can you explain the expected results?
Thank you. | closed | 2019-03-01T17:17:43Z | 2019-03-18T16:12:57Z | https://github.com/KaiyangZhou/deep-person-reid/issues/118 | [] | muna-cs | 1 |
MagicStack/asyncpg | asyncio | 565 | Error when importing after compiling with GCC 10 | * **asyncpg version**: asyncpg-0.21.0.dev0+7f5c2a2 (same for 0.20.1)
* **PostgreSQL version**: does not matter
* **Do you use a PostgreSQL SaaS? If so, which? Can you reproduce
the issue with a local PostgreSQL install?**: does not matter
* **Python version**: 3.8.2 and 3.9.0a5+
* **Platform**: Fedora 32 x64/aarch64 and Alpine Linux 3.12 alpha x64/aarch64
* **Do you use pgbouncer?**: does not matter
* **Did you install asyncpg with pip?**: no
* **If you built asyncpg locally, which version of Cython did you use?**: 0.29.16 and 3.0a2
* **Can the issue be reproduced under both asyncio and
[uvloop](https://github.com/magicstack/uvloop)?**: does not matter
I have tested this on the following scenarios:
Fedora 32 x86_64, Python 3.8.2, GCC 10.0.1 20200328 (Red Hat 10.0.1-0.11) from official repos
Fedora 33 (rawhide) aarch64, Python 3.8.2, GCC 10.0.1 20200420 (Red Hat 10.0.1-0.12) from official repos
Alpine Linux 3.12 alpha x86_64, Python 3.9a5, GCC 10.0.1 20200427 from source (builds at https://ftp.travitia.xyz/alpine/x86_64/)
Alpine Linux 3.12 alpha aarch64, Python 3.9a5, GCC 10.0.1 20200426 from source (builds at https://ftp.travitia.xyz/alpine/aarch64/)
All have been tested twice with Cython 0.29.16 and Cython 3.0a2.
Whenever I compile asyncpg with GCC 9.3 in any of the above scenarios, it compiles fine and runs fine.
Whenever I use GCC 10 in any of the above scenarios, it *does build fine*, but importing it gives me:
```py
>>> import asyncpg
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/jens/.local/lib/python3.8/site-packages/asyncpg/__init__.py", line 8, in <module>
from .connection import connect, Connection # NOQA
File "/home/jens/.local/lib/python3.8/site-packages/asyncpg/connection.py", line 19, in <module>
from . import connect_utils
File "/home/jens/.local/lib/python3.8/site-packages/asyncpg/connect_utils.py", line 28, in <module>
from . import protocol
File "/home/jens/.local/lib/python3.8/site-packages/asyncpg/protocol/__init__.py", line 8, in <module>
from .protocol import Protocol, Record, NO_TIMEOUT # NOQA
File "asyncpg/protocol/protocol.pyx", line 1, in init asyncpg.protocol.protocol
ImportError: /home/jens/.local/lib/python3.8/site-packages/asyncpg/pgproto/pgproto.cpython-38-x86_64-linux-gnu.so: undefined symbol: uuid_to_hex
```
This is weird, as readelf shows:
```sh
$ readelf -a /home/jens/.local/lib/python3.8/site-packages/asyncpg/pgproto/pgproto.cpython-38-x86_64-linux-gnu.so | grep uuid_to_hex
000000055d90 004f00000007 R_X86_64_JUMP_SLO 0000000000000000 uuid_to_hex + 0
79: 0000000000000000 0 NOTYPE GLOBAL DEFAULT UND uuid_to_hex
1176: 0000000000000000 0 NOTYPE GLOBAL DEFAULT UND uuid_to_hex
```
I don't know C much, but I have seen that uuid_to_hex is defined in the code for pgproto, so I have no clue how this happens.
FYI: On all scenarios, I am able to compile and use uvloop and cpython 3.9 without any errors.
EDIT: Same issue with Alpine Linux 3.12 alpha, Python 3.9a6, Cython 3.0a3 and GCC 10 20200430 | closed | 2020-04-28T15:33:44Z | 2020-06-26T18:20:55Z | https://github.com/MagicStack/asyncpg/issues/565 | [] | Gelbpunkt | 3 |
SciTools/cartopy | matplotlib | 1,630 | Crimea is wrongly moved to Russia | I wanted to use your library to make some heatmaps at the country level,
and found that you have wrong shapes for Ukraine and Russia; more precisely, in your shapes Crimea belongs to Russia.
This obviously contradicts the international understanding. With this mistake, you implicitly push all library users into breaking international law on their graphs.
I hope you will fix this fast.
| closed | 2020-08-06T17:59:50Z | 2020-08-06T18:20:24Z | https://github.com/SciTools/cartopy/issues/1630 | [] | johngull | 2 |
wger-project/wger | django | 1,424 | Website terms of service are empty | https://wger.de/en/software/terms-of-service
The page is empty, no matter whether viewed in German or English.
I told a guy on Reddit about this and how he might contribute and get exercises; he liked it, but mentioned that the imprint and the ToS are empty, and this did not sit well with him. | open | 2023-09-11T06:19:16Z | 2023-09-11T12:01:14Z | https://github.com/wger-project/wger/issues/1424 | [] | natrius | 1 |
open-mmlab/mmdetection | pytorch | 11,144 | How to generate coco format annotations on custom dataset | I recently encountered a problem where I fail to generate the COCO-format JSON file, despite having set `format_only=True` in the test_evaluator. I have checked the previous issues and found several similar questions with the related functions removed. Therefore, I wonder whether it is possible to provide a tutorial regarding this issue. Thank you so much in advance. | open | 2023-11-08T14:32:00Z | 2023-11-08T14:32:00Z | https://github.com/open-mmlab/mmdetection/issues/11144 | [] | CDchenlin | 0 |
sczhou/CodeFormer | pytorch | 373 | Question on Training Steps and Cross entropy Loss for Stage 2 | Your work is truly impressive! The clarity and depth you bring to the subject are remarkable!
Could you please let me know how many **training steps** are required in stage 2 and how small the **Cross Entropy Loss** should be to achieve normal images?
I use the pretrained VQGAN provided by the authors to start the stage-2 training, but it seems it cannot generate a normal image. I have trained for 70k steps with a batch size of 8 and a learning rate of 4e-5. The following are my loss figure and results:


Please, kind-hearted person, help me. I don't know what the problem is. | open | 2024-05-23T09:13:04Z | 2025-02-17T13:15:30Z | https://github.com/sczhou/CodeFormer/issues/373 | [] | lxxie298 | 3 |
browser-use/browser-use | python | 214 | Outdated workflow actions | The GitHub Actions workflow still uses checkout@v3 and Python@v3; these have been outdated for a while now and new versions have been published. | closed | 2025-01-11T19:01:07Z | 2025-01-19T23:53:30Z | https://github.com/browser-use/browser-use/issues/214 | [] | sushil959465 | 0 |
nvbn/thefuck | python | 1,425 | Support for Windows CMD | When I run "fuck" on windows cmd I get
"Seems like fuck alias isn't configured!"
But the help does not show how to configure thefuck for Windows CMD, only for PowerShell.
The latest commit shows that there is support for Windows CMD
https://github.com/nvbn/thefuck/commit/3cd187a3bb47351890ac7308464e1a2780507220
But I guess, since the latest release is from January and the commit is from July, a version that supports Windows CMD hasn't been released yet.
---
The output of `thefuck --version` (something like `The Fuck 3.1 using Python
3.5.0 and Bash 4.4.12(1)-release`):
thefuck 3.32 for windows
Your system (Debian 7, ArchLinux, Windows, etc.):
Windows 11 CMD
How to reproduce the bug:
simply run fuck
| closed | 2023-12-06T07:49:43Z | 2023-12-12T07:28:14Z | https://github.com/nvbn/thefuck/issues/1425 | [] | BenjaminKobjolke | 4 |
Sanster/IOPaint | pytorch | 40 | New resizing bug introduced | https://user-images.githubusercontent.com/71787427/163864876-0ab01a33-4d0c-4d2f-9c0e-ee69606ab2a3.mp4
The image resizes itself (auto zooms out) when the mouse is released. Previously it kept the zoom level. It does this with multi-stroke as well.
And this is how the old version acts:
https://user-images.githubusercontent.com/71787427/163869321-ed254ddf-7229-4c47-9b75-d2046d5645f9.mp4
| closed | 2022-04-18T19:28:50Z | 2022-04-20T13:45:44Z | https://github.com/Sanster/IOPaint/issues/40 | [] | FlowDownTheRiver | 7 |
predict-idlab/plotly-resampler | data-visualization | 69 | no support for dict representation of plotly traces | The following code works in plotly, but does not work when wrapping the constructor with `FigureResampler`.
```py
import numpy as np
import plotly.graph_objects as go

trace = {
"type": "scatter",
# "x": [1, 2, 3],
"y": np.arange(5_000),
"name": "some_trace", # is not even required
}
fig = go.Figure() # wrapping this will fail
fig.add_trace(trace)
```
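One possible shape for that support: coerce the dict before handing it to plotly. A dependency-free sketch (the helper name and return shape are hypothetical, not plotly-resampler API):

```python
def coerce_trace_spec(trace):
    """Split a plain-dict trace spec into (trace_type, attrs).

    Mirrors what a dict-accepting add_trace/constructor shim could do
    before instantiating the real plotly trace class for trace_type.
    """
    if not isinstance(trace, dict):
        return None, trace  # already a trace object; pass through untouched
    attrs = dict(trace)  # copy so the caller's dict is not mutated
    trace_type = attrs.pop("type", "scatter")  # fall back to plotly's default trace type
    return trace_type, attrs

# example: the failing spec from above, minus the numpy array
ttype, attrs = coerce_trace_spec({"type": "scatter", "y": [1, 2, 3], "name": "some_trace"})
print(ttype, attrs)  # -> scatter {'y': [1, 2, 3], 'name': 'some_trace'}
```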
We can support dict representation in 2 places:
1) in the constructor of `AbstractFigureAggregator` (-> not really sure if we want to do this)
2) in the `add_trace` method -> our type hinting indicates that we support dict input, but this is clearly not the case | closed | 2022-06-07T19:00:07Z | 2022-06-17T11:58:11Z | https://github.com/predict-idlab/plotly-resampler/issues/69 | [
"bug"
] | jvdd | 1 |
AutoViML/AutoViz | scikit-learn | 104 | Inconsistent performance | I have a dataset with 6439 rows and 28 columns(5 string columns,1 datetime column,3 integers,19 float columns)
These are the installed requirements for `python 3.8.18`
```
bokeh 2.4.3
holoviews 1.14.9
hvplot 0.7.3
panel 0.14.4
```
When trying to create autoviz charts then it is creating only `data quality report`, `displots_nums.html`, and `pair_scatters.html` charts and gives the following error
```
underflow encountered in true_divide
```
when taking only 2000 rows of the same dataset, everything works fine. | closed | 2024-02-12T05:10:12Z | 2024-02-20T01:09:07Z | https://github.com/AutoViML/AutoViz/issues/104 | [] | ArijitSinghEDA | 4 |
twopirllc/pandas-ta | pandas | 532 | VWAP Bands request like IB | Which version are you running? The latest version is on Github. Pip is for major releases.
```
import pandas_ta as ta
print(ta.version)
0.3.64b0
```
**Indicator addition to match another broker**
Hi,
I'm kind of new around here, and I've been looking for an indicator that matches the behavior of the VWAP standard deviation bands that appear in the Interactive Brokers management portal.
I've been examining the indicator that was recently added in https://github.com/twopirllc/pandas-ta/pull/488,
and its behavior and results are pretty different from IB's.
(BTW, while searching around for this topic, it seems that most formulas and libraries get the same results as your indicator.)
In the example attached, I pulled the data of a stock in 5-minute candles, calculated the VWAP and bands using your indicator, and got the same VWAP value but very different band values. I've been looking at the 8 and 11 bands in IB's terminology and definition (an example of the settings is attached), but the results I'm getting are very different, and it seems that the "issue" is with how the bands are calculated over time (again, I'm pretty new to all of this).
I would appreciate any help and will provide whatever data I can!
Thanks!
DataFrame data:
[wfc_five_minutes_candles.zip](https://github.com/twopirllc/pandas-ta/files/8701973/wfc_five_minutes_candles.zip)
My code and plot:
```python
import json
import pandas as pd
import pandas_ta as ta  # registers the .ta DataFrame accessor
import mplfinance as mpf

df = pd.read_json(json.dumps(data))
df.sort_values(by="datetime", inplace=True)
df.set_index(pd.DatetimeIndex(df["datetime"]), inplace=True)
vwap = df.ta.vwap(anchor="D", bands=[8, 11], offset=None, append=False)
last = 100
adp = mpf.make_addplot(vwap.tail(last), type='line')
mpf.plot(df, figratio=(10, 5), type='candle', addplot=adp, volume=True, style='yahoo')
```
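While debugging, it may help to have a reference implementation of one common formulation: a cumulative, volume-weighted standard deviation around the running VWAP. This is a dependency-free sketch; I can't confirm it is exactly what IB computes, but it is the kind of "calculation over time" difference described above:

```python
def vwap_with_bands(prices, volumes, k=1.0):
    """Running VWAP plus +/- k volume-weighted std-dev bands (anchored at bar 0)."""
    cum_v = cum_pv = cum_pv2 = 0.0
    out = []
    for p, v in zip(prices, volumes):
        cum_v += v
        cum_pv += p * v
        cum_pv2 += p * p * v
        vwap = cum_pv / cum_v
        # volume-weighted variance: E[p^2] - E[p]^2, clamped against rounding
        var = max(cum_pv2 / cum_v - vwap * vwap, 0.0)
        sd = var ** 0.5
        out.append((vwap, vwap - k * sd, vwap + k * sd))
    return out

# toy run: constant price gives zero-width bands
print(vwap_with_bands([10.0, 10.0, 10.0], [1.0, 2.0, 3.0])[-1])  # -> (10.0, 10.0, 10.0)
```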
plot:

Interactive indicator setup screen and values:

interactive plotting:

| open | 2022-05-16T17:13:49Z | 2022-05-19T22:42:45Z | https://github.com/twopirllc/pandas-ta/issues/532 | [
"enhancement",
"help wanted",
"good first issue"
] | rotemkama | 2 |
amdegroot/ssd.pytorch | computer-vision | 206 | EXCUSE ME! I'VE A PROBLEM WITH THIS PROGRAM ON CPU | Because of my PC, I have no way to install the NVIDIA driver.
However, when I ran train.py, an error came out at the line `images, targets = next(batch_iterator)`: `AssertionError: Torch not compiled with CUDA enabled`.
What should I do now?
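The error means the code calls `.cuda()` (directly, or by loading GPU-saved weights) on a CPU-only PyTorch build. A common device-agnostic pattern looks like this; it is a sketch, and the commented variable names are placeholders rather than the repo's actual code:

```python
import torch

# pick the GPU when one is usable, otherwise fall back to CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# move the model and every batch to that one device instead of calling .cuda():
# model = model.to(device)                 # placeholder names
# images = images.to(device)
# targets = [t.to(device) for t in targets]

# when loading weights that were saved on a GPU machine (path is a placeholder):
# state = torch.load("weights/ssd300.pth", map_location=device)

print(device)
```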
THANKS FOR YOUR HELP | open | 2018-07-25T07:51:06Z | 2018-07-25T08:08:58Z | https://github.com/amdegroot/ssd.pytorch/issues/206 | [] | lc33851947 | 2 |
neuml/txtai | nlp | 372 | Add Cross-Encoder support to Similarity pipeline | It seems like cross-encoders are the preferred models for re-ranking search results that were generated by other means (BM25, vector search, etc.). However, if I provide one of these models as the path, all the results just have scores of 0.5.
Sentence Transformers recommends doing this. https://www.sbert.net/examples/applications/retrieve_rerank/README.html
In particular, their msmarco-minilm models seem ideal as a default (maybe the L-6 version?) https://www.sbert.net/docs/pretrained-models/ce-msmarco.html
Haystack's implementation uses this in its Ranker node. https://haystack.deepset.ai/pipeline_nodes/ranker
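For reference, the re-ranking step itself is tiny once a pairwise scorer exists. Below is a dependency-free sketch in which `score_fn` stands in for a cross-encoder's `(query, passage) -> score` prediction; the helper names are illustrative, not txtai API:

```python
def rerank(query, passages, score_fn, limit=None):
    """Re-order first-stage results (BM25/vector search) by pairwise score."""
    scored = sorted(
        ((score_fn(query, p), p) for p in passages),
        key=lambda pair: pair[0],
        reverse=True,
    )
    return [(p, s) for s, p in scored][:limit]

# toy scorer: word overlap between query and passage
toy = lambda q, p: len(set(q.split()) & set(p.split()))
print(rerank("feel good story", ["maine cat story", "feel good story wins"], toy))
# -> [('feel good story wins', 3), ('maine cat story', 1)]
```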
| closed | 2022-10-19T02:57:35Z | 2022-10-26T11:25:16Z | https://github.com/neuml/txtai/issues/372 | [] | nickchomey | 7 |
mithi/hexapod-robot-simulator | plotly | 21 | Add preliminary tests | - [x] Page IK Random Pose
- [x] Kinematics Random Pose
- [x] Page Leg Pattern | closed | 2020-04-10T11:08:18Z | 2020-04-16T13:16:20Z | https://github.com/mithi/hexapod-robot-simulator/issues/21 | [
"PRIORITY",
"best practice"
] | mithi | 1 |
tiangolo/uvicorn-gunicorn-fastapi-docker | pydantic | 243 | Support for Apple sillicon | Hi,
Will there be support for arm64 architecture in the near future?
Offering it on top of linux/amd64 would be perfect. Thanks in advance! | closed | 2023-05-17T15:20:28Z | 2024-08-25T04:01:57Z | https://github.com/tiangolo/uvicorn-gunicorn-fastapi-docker/issues/243 | [] | vranicf | 0 |
keras-team/keras | data-science | 20,421 | Add similarity loss functions for Keras 3? | Hello,
Is there any interest in allowing training on embeddings using similarity loss functions? Currently, TensorFlow Similarity is not working with Keras 3 (the latest working TF version is 2.15), and the TF-Similarity repo does not look active enough to hope for an update anytime soon.
I was thinking that, if there is some interest, I could rewrite even a single similarity loss function, which would open the door for such tasks on Keras 3. I checked KerasHub too, but it does not seem to have any loss functions.
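For scoping purposes, the simplest member of this family (a pairwise cosine/contrastive objective) fits in a few framework-agnostic lines. This is a plain-Python sketch of the math only, not the CircleLoss formulation and not Keras 3 API:

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def contrastive_loss(anchor, other, is_similar, margin=0.5):
    """Pull similar pairs toward sim=1, push dissimilar pairs below `margin`."""
    sim = cosine_similarity(anchor, other)
    return (1.0 - sim) if is_similar else max(0.0, sim - margin)

print(contrastive_loss([1.0, 0.0], [1.0, 0.0], True))   # identical pair -> 0.0
print(contrastive_loss([1.0, 0.0], [0.0, 1.0], False))  # orthogonal pair -> 0.0
```

A Keras 3 port would wrap the same math in `keras.ops` over batched tensors; the hard part is the batch-mining logic around it, not the per-pair formula.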
I can go with the newer [CircleLoss](https://github.com/tensorflow/similarity/blob/master/tensorflow_similarity/losses/circle_loss.py). | closed | 2024-10-28T15:49:50Z | 2024-11-06T20:14:06Z | https://github.com/keras-team/keras/issues/20421 | [
"type:feature"
] | ma7555 | 3 |
cvat-ai/cvat | computer-vision | 9,145 | Can't import model from HuggingFace | ### Actions before raising this issue
- [x] I searched the existing issues and did not find anything similar.
- [x] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Steps to Reproduce
1. Go to app.cvat.ai/models/create
2. add a link to a huggingface model (https://huggingface.co/Ultralytics/YOLOv5 yolov5 for example)
3. Try to add
4. get the next error

### Expected Behavior
I've tried with other models too and it doesn't work. This link is the same as in the official CVAT tutorial https://www.cvat.ai/blog/integrating-hugging-face-and-roboflow-models
### Possible Solution
_No response_
### Context
_No response_
### Environment
```Markdown
``` | open | 2025-02-24T13:15:43Z | 2025-02-24T13:15:43Z | https://github.com/cvat-ai/cvat/issues/9145 | [
"bug"
] | cile98 | 0 |
globaleaks/globaleaks-whistleblowing-software | sqlalchemy | 3,571 | Automatic Let's Encrypt certificate renewal failing on python3-acme 2.1.0 | ### What version of GlobaLeaks are you using?
Latest globaleaks version (4.12.8) on Debian bookworm
### What browser(s) are you seeing the problem on?
All
### What operating system(s) are you seeing the problem on?
Linux
### Describe the issue
Since updating the repo to use the bookworm packages, we see the following error in our logs, and we get an email that renewal has failed:
```
2023-07-25 03:26:15+0000 [-] [E] [1] Automatic HTTPS renewal failed ('NoneType' object has no attribute 'update')
2023-07-26 02:33:11+0000 [-] [E] [1] Automatic HTTPS renewal failed ('NoneType' object has no attribute 'update')
2023-07-27 01:25:11+0000 [-] [E] [1] Automatic HTTPS renewal failed ('NoneType' object has no attribute 'update')
2023-07-28 01:26:35+0000 [-] [E] [1] Automatic HTTPS renewal failed ('NoneType' object has no attribute 'update')
2023-07-29 02:40:52+0000 [-] [E] [1] Automatic HTTPS renewal failed ('NoneType' object has no attribute 'update')
2023-07-30 03:18:16+0000 [-] [E] [1] Automatic HTTPS renewal failed ('NoneType' object has no attribute 'update')
2023-07-31 03:25:40+0000 [-] [E] [1] Automatic HTTPS renewal failed ('NoneType' object has no attribute 'update')
2023-08-01 02:30:22+0000 [-] [E] [1] Automatic HTTPS renewal failed ('NoneType' object has no attribute 'update')
2023-08-02 01:26:23+0000 [-] [E] [1] Automatic HTTPS renewal failed ('NoneType' object has no attribute 'update')
2023-08-03 02:35:04+0000 [-] [E] [1] Automatic HTTPS renewal failed ('NoneType' object has no attribute 'update')
```
### Proposed solution
_No response_ | closed | 2023-08-03T09:33:08Z | 2023-08-05T09:17:10Z | https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/3571 | [
"T: Bug",
"Triage"
] | DanWin | 10 |
sqlalchemy/alembic | sqlalchemy | 853 | Alembic not loading config member | We're working on a project which uses Postgresql as a data source which is then managed/initiated using SqlAlchemy to create models, and Alembic for migrations.
The issue is that Alembic cannot auto-generate any new migrations, nor can it upgrade the database to the latest iteration when using the Alembic command. When done programmatically the database version upgrades fine, but we're unable to generate new revisions. There are multiple errors that can show up depending on minor changes in Alembic's `env.py` script, but essentially they come down to issues with `context`'s config member.
Here is the folder structure:
```
my-project
│ alembic.ini
│
└───alembic
│ │ env.py
│ │ README
│ │ script.py.mako
│ │
│ └───versions
│ │ 95ba8c1c3126_some_revision.py
│ │ 43700c722f83_other_revision.py
│ │ ...
```
here are the relevant information from `alembic.ini` (everything else is default)
```
[alembic]
script_location = alembic
.
.
.
prepend_sys_path = .
.
.
.
# set this in env.py so we don't need to commit the database password
sqlalchemy.url = NULL
```
and here is env.py:
```
from logging.config import fileConfig
from sqlalchemy import engine_from_config
from sqlalchemy import pool
from alembic import context
# this is the Alembic Config object, which provides
# access to the values within the .ini file in use.
config = context.config
# Interpret the config file for Python logging.
# This line sets up loggers basically.
fileConfig(config.config_file_name, disable_existing_loggers=False)
# add your model's MetaData object here
# for 'autogenerate' support
# from myapp import mymodel
# target_metadata = mymodel.Base.metadata
from core.database.database import Base, get_database_url
config.set_main_option('sqlalchemy.url', get_database_url())
target_metadata = Base.metadata
# other values from the config, defined by the needs of env.py,
# can be acquired:
# my_important_option = config.get_main_option("my_important_option")
# ... etc.
def run_migrations_offline():
"""Run migrations in 'offline' mode.
This configures the context with just a URL
and not an Engine, though an Engine is acceptable
here as well. By skipping the Engine creation
we don't even need a DBAPI to be available.
Calls to context.execute() here emit the given string to the
script output.
"""
url = config.get_main_option("sqlalchemy.url")
context.configure(
url=url,
target_metadata=target_metadata,
literal_binds=True,
dialect_opts={"paramstyle": "named"},
)
with context.begin_transaction():
context.run_migrations()
def run_migrations_online():
"""Run migrations in 'online' mode.
In this scenario we need to create an Engine
and associate a connection with the context.
"""
connectable = engine_from_config(
config.get_section(config.config_ini_section),
prefix="sqlalchemy.",
poolclass=pool.NullPool,
)
with connectable.connect() as connection:
context.configure(
connection=connection, target_metadata=target_metadata
)
with context.begin_transaction():
context.run_migrations()
if context.is_offline_mode():
run_migrations_offline()
else:
run_migrations_online()
```
this is the error I receive when running something like
`alembic revision -m "Add maintenances" --autogenerate` from the same directory where 'alembic.ini' is located
error:
```
INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO [alembic.runtime.migration] Will assume transactional DDL.
Traceback (most recent call last):
File "/home/marco_rocco/.local/lib/python3.9/site-packages/alembic/command.py", line 212, in revision
script_directory.run_env()
File "/home/marco_rocco/.local/lib/python3.9/site-packages/alembic/script/base.py", line 490, in run_env
util.load_python_file(self.dir, "env.py")
File "/home/marco_rocco/.local/lib/python3.9/site-packages/alembic/util/pyfiles.py", line 97, in load_python_file
module = load_module_py(module_id, path)
File "/home/marco_rocco/.local/lib/python3.9/site-packages/alembic/util/compat.py", line 184, in load_module_py
spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 790, in exec_module
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
File "alembic/env.py", line 83, in <module>
if context.is_offline_mode():
File "<string>", line 8, in is_offline_mode
AttributeError: 'NoneType' object has no attribute 'is_offline_mode'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/marco_rocco/.local/bin/alembic", line 8, in <module>
sys.exit(main())
File "/home/marco_rocco/.local/lib/python3.9/site-packages/alembic/config.py", line 559, in main
CommandLine(prog=prog).main(argv=argv)
File "/home/marco_rocco/.local/lib/python3.9/site-packages/alembic/config.py", line 553, in main
self.run_cmd(cfg, options)
File "/home/marco_rocco/.local/lib/python3.9/site-packages/alembic/config.py", line 530, in run_cmd
fn(
File "/home/marco_rocco/.local/lib/python3.9/site-packages/alembic/command.py", line 212, in revision
script_directory.run_env()
File "/home/marco_rocco/.local/lib/python3.9/site-packages/alembic/runtime/environment.py", line 110, in __exit__
self._remove_proxy()
File "/home/marco_rocco/.local/lib/python3.9/site-packages/alembic/util/langhelpers.py", line 50, in _remove_proxy
del globals_[attr_name]
KeyError: 'config'
``` | closed | 2021-06-04T09:15:03Z | 2021-08-24T07:11:38Z | https://github.com/sqlalchemy/alembic/issues/853 | [] | MarcoRocco | 0 |
widgetti/solara | jupyter | 606 | Bug in patching of rcParams | I'm running into the following failure over in glue-jupyter:
```
/usr/lib/python3.11/contextlib.py:137: in __enter__
return next(self.gen)
../../.tox/py311-test-visual/lib/python3.11/site-packages/pytest_mpl/plugin.py:318: in switch_backend
prev_backend = matplotlib.get_backend().lower()
../../.tox/py311-test-visual/lib/python3.11/site-packages/matplotlib/__init__.py:1275: in get_backend
return rcParams['backend']
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = RcParamsScoped({'_internal.classic_mode': True,
'agg.path.chunksize': 0,
'animation.bi... 'ytick.minor.visible': False,
'ytick.minor.width': 0.5,
'ytick.right': True})
key = 'backend'
def __getitem__(self, key):
> return self._get_context_dict().__getitem__(key)
E KeyError: 'backend'
```
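The failure is easy to reproduce generically: if a context-scoped mapping starts from a fresh dict instead of layering over the base `rcParams`, every unpatched key (like `"backend"`) vanishes. A stdlib sketch of the overlay-with-fallback behavior that avoids this (an assumption about how `RcParamsScoped` works internally, inferred from the traceback):

```python
from collections import ChainMap

base = {"backend": "agg", "figure.dpi": 100.0}

broken_scope = {}                    # fresh dict: loses every unpatched key
fallback_scope = ChainMap({}, base)  # overlay: writes stay local, reads fall through

fallback_scope["figure.dpi"] = 150.0  # scoped override
print(fallback_scope["backend"])      # -> agg   (falls through to base)
print(base["figure.dpi"])             # -> 100.0 (base left untouched)
print("backend" in broken_scope)      # -> False (the KeyError above)
```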
basically it seems the backend key is missing from the patched rcParams. | closed | 2024-04-16T13:32:27Z | 2025-02-10T13:32:46Z | https://github.com/widgetti/solara/issues/606 | [] | astrofrog | 2 |
pyro-ppl/numpyro | numpy | 1,987 | [FR] Update the flax module to adapt flax new nnx api | ### Feature Summary
The fully featured neural-network library Flax has launched a new simplified NNX API to "make it easier to create, inspect, debug, and analyze neural networks in JAX". However, NumPyro still uses the old Flax Linen API in the flax module. It is necessary to upgrade this module to keep up with the development of Flax (and possibly deprecate the Haiku module).
### Why is this needed?
It seems the mainstream JAX-based network libraries have converged on Flax NNX (merged with Haiku), and in the long term the maintenance of the old Flax Linen API is uncertain. So, this feature has 2 advantages:
1. keep up with the development of JAX net ecosystem;
2. simplify numpyro code base, i.e. consider merging flax and haiku module.
| closed | 2025-02-24T01:15:22Z | 2025-03-10T19:26:15Z | https://github.com/pyro-ppl/numpyro/issues/1987 | [
"enhancement"
] | sejabs | 1 |
snarfed/granary | rest-api | 232 | Using PixelFed reports 500 server error | Just passing along. Holler if I can help debug. Thanks!
Ray
GET https://granary.io/pixelfed/267478200693690368/@all/@app/?format=atom&access_token=...&user_id=267478200693690368&instance=https://pixelfed.social/
500 Internal Server Error
The server has either erred or is incapable of performing the requested operation.
| closed | 2021-02-26T02:48:04Z | 2021-02-26T05:25:54Z | https://github.com/snarfed/granary/issues/232 | [] | ScootRay | 1 |
graphql-python/graphene-sqlalchemy | graphql | 381 | How to remove "edges" and "node" from query? | I would like to be able to use syntax like this:

But instead I must do something like this:
```
{
pets {
edges {
node {
name
}
}
}
}
```
Another example:
> For e.g., now:
> query { authors { edges { node { id name books { edges { node { id name } } } } } } }
> What i want:
> query { authors { id name books { id name } } }
Another person wanted the same thing here: https://github.com/graphql-python/graphene-sqlalchemy/issues/374
Will keep poking at examples and the code to try to find a way. Which package is responsible for the edges-node syntax? Is it graphene-sqlalchemy or graphene itself? | closed | 2023-02-14T20:41:16Z | 2023-08-30T00:36:29Z | https://github.com/graphql-python/graphene-sqlalchemy/issues/381 | [] | MattKleinsmith | 9 |
plotly/dash | flask | 2,776 | Add documentation/examples using CeleryManager as background_callback_manager together with Dash Pages | From the Dash doc, I'm able to create a simple, single-page Dash-app that uses CeleryManager. Please see main-branch in this repo:
https://github.com/roar-skinderviken/dash-celery-docker-compose
When trying to apply background_callback_manager/CeleryManager using Dash Pages, the code stopped working, and I was not able to find any info (I spent like two days).
https://github.com/roar-skinderviken/dash-celery-docker-compose/pull/1
Hence, some doc/examples on how to use background_callback_manager/CeleryManager with Dash Pages would be nice. | closed | 2024-03-03T13:18:41Z | 2024-03-04T13:55:39Z | https://github.com/plotly/dash/issues/2776 | [] | roar-skinderviken | 2 |
Lightning-AI/pytorch-lightning | data-science | 20,456 | [BUG] (`DataLoader`) sanity check fails due to `Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor)` | ### Bug description
Hi there! I have previously created my first `LightningDataModule`; more specifically, a `NonGeoDataModule` which inherits from it (see [torchgeo-fork](https://github.com/MathiasBaumgartinger/torchgeo/blob/main/torchgeo/datamodules/flair2.py)). Interestingly, when I try to run this module I get `RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same or input should be a MKLDNN tensor and weight is a dense tensor`. Even more interesting is the fact that if I override `transfer_batch_to_device` like:
```py
def transfer_batch_to_device(self, batch: Any, device: torch.device, dataloader_idx: int) -> Any:
batch = super().transfer_batch_to_device(batch, device, dataloader_idx)
print("----------------------------------------")
for k in batch.keys(): print(k, batch[k][0].get_device())
print("----------------------------------------")
return batch
```
I get the output
> image 0
> mask 0
It happens during the validation step (lightning/pytorch/strategies/strategy.py, line 411).
### What version are you seeing the problem on?
v2.4
### How to reproduce the bug
```python
def train(
config: dict,
data_dir: str=default_data_dir,
root_dir: str=default_root_dir,
min_epochs: int=1,
max_epochs: int=25) -> None:
tune_metrics = {"loss": "ptl/val_loss", "acc": "ptl/val_accuracy"}
module = FL(
num_workers=config["num_workers"],
batch_size=config["batch_size"],
patch_size=config["patch_size"],
val_split_pct=0.25,
use_toy=True,
#augs=transforms,
root=data_dir,
)
task = SemanticSegmentationTask(
model="unet",
backbone="resnet50",
ignore_index=255,
in_channels=5,#(5+3), #appended indices
num_classes=13,
lr=config["lr"],
patience=config["lr_patience"]
)
# Callbacks
checkpoint_callback = ModelCheckpoint(monitor="val_loss", save_top_k=1, mode="min")
lr_monitor = LearningRateMonitor(logging_interval="step")
tune_callback = TuneReportCheckpointCallback(
{"loss": "val_loss", "accuracy": "val_accuracy"}, on="validation_end"
)
logger = TensorBoardLogger(save_dir=root_dir, name="FLAIR2logs")
trainer = Trainer(
accelerator=accelerator,
num_nodes=1,
callbacks=[checkpoint_callback, lr_monitor, tune_callback],
log_every_n_steps=1,
logger=logger,
min_epochs=1,
max_epochs=25,
precision=32,
)
trainer.fit(model=task, datamodule=module)
```
### Error messages and logs
```
Traceback (most recent call last):
File "//Dev/forks/torchgeo/train_simple.py", line 158, in <module>
main()
File "//Dev/forks/torchgeo/train_simple.py", line 154, in main
train(config)
File "//Dev/forks/torchgeo/train_simple.py", line 151, in train
trainer.fit(model=task, datamodule=module)
File "//miniforge3/envs/torchgeo/lib/python3.10/site-packages/lightning/pytorch/trainer/trainer.py", line 538, in fit
call._call_and_handle_interrupt(
File "//miniforge3/envs/torchgeo/lib/python3.10/site-packages/lightning/pytorch/trainer/call.py", line 47, in _call_and_handle_interrupt
return trainer_fn(*args, **kwargs)
File "//miniforge3/envs/torchgeo/lib/python3.10/site-packages/lightning/pytorch/trainer/trainer.py", line 574, in _fit_impl
self._run(model, ckpt_path=ckpt_path)
File "//miniforge3/envs/torchgeo/lib/python3.10/site-packages/lightning/pytorch/trainer/trainer.py", line 981, in _run
results = self._run_stage()
File "//miniforge3/envs/torchgeo/lib/python3.10/site-packages/lightning/pytorch/trainer/trainer.py", line 1023, in _run_stage
self._run_sanity_check()
File "//miniforge3/envs/torchgeo/lib/python3.10/site-packages/lightning/pytorch/trainer/trainer.py", line 1052, in _run_sanity_check
val_loop.run()
File "//miniforge3/envs/torchgeo/lib/python3.10/site-packages/lightning/pytorch/loops/utilities.py", line 178, in _decorator
return loop_run(self, *args, **kwargs)
File "//miniforge3/envs/torchgeo/lib/python3.10/site-packages/lightning/pytorch/loops/evaluation_loop.py", line 135, in run
self._evaluation_step(batch, batch_idx, dataloader_idx, dataloader_iter)
File "//miniforge3/envs/torchgeo/lib/python3.10/site-packages/lightning/pytorch/loops/evaluation_loop.py", line 396, in _evaluation_step
output = call._call_strategy_hook(trainer, hook_name, *step_args)
File "//miniforge3/envs/torchgeo/lib/python3.10/site-packages/lightning/pytorch/trainer/call.py", line 319, in _call_strategy_hook
output = fn(*args, **kwargs)
File "//miniforge3/envs/torchgeo/lib/python3.10/site-packages/lightning/pytorch/strategies/strategy.py", line 411, in validation_step
return self.lightning_module.validation_step(*args, **kwargs)
File "//Dev/forks/torchgeo/torchgeo/trainers/segmentation.py", line 251, in validation_step
y_hat = self(x)
File "//miniforge3/envs/torchgeo/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "//miniforge3/envs/torchgeo/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
File "//Dev/forks/torchgeo/torchgeo/trainers/base.py", line 81, in forward
return self.model(*args, **kwargs)
File "//miniforge3/envs/torchgeo/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "//miniforge3/envs/torchgeo/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
File "//miniforge3/envs/torchgeo/lib/python3.10/site-packages/segmentation_models_pytorch/base/model.py", line 38, in forward
features = self.encoder(x)
File "//miniforge3/envs/torchgeo/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "//miniforge3/envs/torchgeo/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
File "//miniforge3/envs/torchgeo/lib/python3.10/site-packages/segmentation_models_pytorch/encoders/resnet.py", line 63, in forward
x = stages[i](x)
File "//miniforge3/envs/torchgeo/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "//miniforge3/envs/torchgeo/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
File "//miniforge3/envs/torchgeo/lib/python3.10/site-packages/torch/nn/modules/container.py", line 219, in forward
input = module(input)
File "//miniforge3/envs/torchgeo/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "//miniforge3/envs/torchgeo/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
File "//miniforge3/envs/torchgeo/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 458, in forward
return self._conv_forward(input, self.weight, self.bias)
File "//miniforge3/envs/torchgeo/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 454, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same or input should be a MKLDNN tensor and weight is a dense tensor
```
### Environment
<details>
<summary>Current environment</summary>
```
-----------------------------------------------------------
Python Version: 3.10.4
PyTorch Version: 2.4.1
Cuda is available version: 12.4
Torch built with CUDA: True
cuDNN Version: 90100
cuDNN Enabled: True
cuDNN available: True
Device: cuda
Accelerator: gpu
lightning 2.4.0
lightning-utilities 0.11.9
pytorch-lightning 2.4.0
## conda env
name: torchgeo
channels:
- pytorch
- nvidia
- conda-forge
- defaults
dependencies:
- python=3.10
- pytorch-cuda=12.4
- pytorch=2.4
- torchgeo=0.6.0
- tensorboard=2.17
-----------------------------------------------------------
```
</details>
### More info
_No response_ | open | 2024-11-27T17:16:35Z | 2025-01-07T10:59:42Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20456 | [
"3rd party",
"ver: 2.4.x"
] | MathiasBaumgartinger | 9 |
ets-labs/python-dependency-injector | flask | 452 | Pyright: @injected providers throw type errors | When using [Pyright](https://github.com/microsoft/pyright) for typechecking, injected provider arguments raise type errors.
For example, in the flask miniapp (`examples/miniapps/flask/`), the full output from running `pyright githubnavigator/views.py` from the root of the flask example app is below:
```
Searching for source files
Found 1 source file
/home/****/Code/ets-labs/python-dependency-injector/examples/miniapps/flask/githubnavigator/views.py
12:49 - error: Expected class type but received "Configuration" (reportGeneralTypeIssues)
12:49 - error: Expected no type arguments (reportGeneralTypeIssues)
12:41 - error: Expression of type "Type[Provide[Configuration]]" cannot be assigned to parameter of type "SearchService"
"Type[ClassGetItemMeta]" is incompatible with "Type[SearchService]" (reportGeneralTypeIssues)
13:38 - error: Expected class type but received "Configuration" (reportGeneralTypeIssues)
13:38 - error: Expected class type but received "ConfigurationOption" (reportGeneralTypeIssues)
13:38 - error: Expected class type but received "ConfigurationOption" (reportGeneralTypeIssues)
13:38 - error: Expected no type arguments (reportGeneralTypeIssues)
13:30 - error: Expression of type "Type[Provide[ConfigurationOption]]" cannot be assigned to parameter of type "str"
"Type[ClassGetItemMeta]" is incompatible with "Type[str]" (reportGeneralTypeIssues)
14:38 - error: Expected class type but received "TypedConfigurationOption[int]" (reportGeneralTypeIssues)
14:38 - error: Expected no type arguments (reportGeneralTypeIssues)
14:30 - error: Expression of type "Type[Provide[TypedConfigurationOption[int]]]" cannot be assigned to parameter of type "int"
"Type[ClassGetItemMeta]" is incompatible with "Type[int]" (reportGeneralTypeIssues)
11 errors, 0 warnings, 0 infos
Completed in 1.186sec
```
Edit: Pre-emptively published. | open | 2021-04-25T05:45:25Z | 2022-09-09T13:06:24Z | https://github.com/ets-labs/python-dependency-injector/issues/452 | [] | jack-michaud | 2 |
Evil0ctal/Douyin_TikTok_Download_API | fastapi | 510 | 400 error! | {
"detail": {
"code": 400,
"message": "An error occurred.",
"support": "Please contact us on Github: https://github.com/Evil0ctal/Douyin_TikTok_Download_API",
"time": "2024-10-17 06:42:01",
"router": "/api/douyin/web/fetch_user_post_videos",
"params": {
"sec_user_id": "MS4wLjABAAAA9jfPF6GlW7GUaZqRrlBiPHFiYqV8Mv5Fvy2QYIWOkNws0hrC0eoB07VkTwH9ABiF",
"max_cursor": "0",
"count": "20"
}
}
}
Is this a cookie issue? | closed | 2024-11-24T08:57:27Z | 2024-11-28T05:10:45Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/510 | [
"BUG",
"enhancement"
] | xinchengyou1987 | 3 |
microsoft/qlib | machine-learning | 1,567 | What is the purpose of <MODEL> and <DATASET> in the yaml file, and why is it written this way? | ## ❓ Questions and Help
What is the purpose of <MODEL> and <DATASET> in the yaml file, and why is it written this way?
| closed | 2023-06-25T05:52:56Z | 2023-10-24T03:09:15Z | https://github.com/microsoft/qlib/issues/1567 | [
"question"
] | moesakura | 3 |
piskvorky/gensim | data-science | 3,316 | corpora.TextDirectoryCorpus fails on utf-8 encoded files on windows | #### Problem description
What are you trying to achieve? What is the expected result? What are you seeing instead?
I have a directory of utf-8 encoded files (scraped Reddit submissions selftext i.e. text in reddit posts) in plain text. I wanted to create a corpus using gensim.corpora.TextDirectoryCorpus(<dir_of_scraped_plaintexts>). I expect this to run without error and return a working corpus. I see a UnicodeDecodeError instead: (not to be confused with the UnicodeDecodeError in the [FAQ Q10](https://github.com/RaRe-Technologies/gensim/wiki/Recipes-&-FAQ#q10-loading-a-word2vec-model-fails-with-unicodedecodeerror-utf-8-codec-cant-decode-bytes-in-position-)
<details> <summary>Stack Trace</summary>
---------------------------------------------------------------------------
UnicodeDecodeError Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_13728\103931668.py in <module>
1 # load all selftext into gensim
2 all_selftext_dir = Path.cwd() / 'data/all_selftexts'
----> 3 corpus = gensim.corpora.TextDirectoryCorpus(str(all_selftext_dir))
D:\Work\Anaconda\envs\cs37\lib\site-packages\gensim\corpora\textcorpus.py in __init__(self, input, dictionary, metadata, min_depth, max_depth, pattern, exclude_pattern, lines_are_documents, **kwargs)
433 self.exclude_pattern = exclude_pattern
434 self.lines_are_documents = lines_are_documents
--> 435 super(TextDirectoryCorpus, self).__init__(input, dictionary, metadata, **kwargs)
436
437 @property
D:\Work\Anaconda\envs\cs37\lib\site-packages\gensim\corpora\textcorpus.py in __init__(self, input, dictionary, metadata, character_filters, tokenizer, token_filters)
181 self.length = None
182 self.dictionary = None
--> 183 self.init_dictionary(dictionary)
184
185 def init_dictionary(self, dictionary):
D:\Work\Anaconda\envs\cs37\lib\site-packages\gensim\corpora\textcorpus.py in init_dictionary(self, dictionary)
203 metadata_setting = self.metadata
204 self.metadata = False
--> 205 self.dictionary.add_documents(self.get_texts())
206 self.metadata = metadata_setting
207 else:
D:\Work\Anaconda\envs\cs37\lib\site-packages\gensim\corpora\dictionary.py in add_documents(self, documents, prune_at)
192
193 """
--> 194 for docno, document in enumerate(documents):
195 # log progress & run a regular check for pruning, once every 10k docs
196 if docno % 10000 == 0:
D:\Work\Anaconda\envs\cs37\lib\site-packages\gensim\corpora\textcorpus.py in get_texts(self)
312 yield self.preprocess_text(line), (lineno,)
313 else:
--> 314 for line in lines:
315 yield self.preprocess_text(line)
316
D:\Work\Anaconda\envs\cs37\lib\site-packages\gensim\corpora\textcorpus.py in getstream(self)
521 except Exception as e:
522 print(path)
--> 523 raise e
524 num_texts += 1
525
D:\Work\Anaconda\envs\cs37\lib\site-packages\gensim\corpora\textcorpus.py in getstream(self)
518 else:
519 try:
--> 520 yield f.read().strip()
521 except Exception as e:
522 print(path)
D:\Work\Anaconda\envs\cs37\lib\encodings\cp1252.py in decode(self, input, final)
21 class IncrementalDecoder(codecs.IncrementalDecoder):
22 def decode(self, input, final=False):
---> 23 return codecs.charmap_decode(input,self.errors,decoding_table)[0]
24
25 class StreamWriter(Codec,codecs.StreamWriter):
UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 1897: character maps to <undefined>
</details>
#### Steps/code/corpus to reproduce
**Platform Specific**: this error is reproducible on platforms where ```locale.getpreferredencoding() == 'cp1252'``` i.e. it is reproducible only on some _Windows_ machines.
Consider this file: [encoding_err_txt.txt](https://github.com/RaRe-Technologies/gensim/files/8396053/encoding_err_txt.txt)
Place the above file in an empty directory, then run:
```
gensim.corpora.TextDirectoryCorpus(<path_to_dir>)
```
#### Versions
```
>>> import platform; print(platform.platform())
Windows-10-10.0.19041-SP0
>>> import sys; print("Python", sys.version)
Python 3.7.10 (default, Feb 26 2021, 13:06:18) [MSC v.1916 64 bit (AMD64)]
>>> import struct; print("Bits", 8 * struct.calcsize("P"))
Bits 64
>>> import numpy; print("NumPy", numpy.__version__)
NumPy 1.21.5
>>> import scipy; print("SciPy", scipy.__version__)
SciPy 1.7.3
>>> import gensim; print("gensim", gensim.__version__)
gensim 4.1.2
>>> from gensim.models import word2vec;print("FAST_VERSION", word2vec.FAST_VERSION)
FAST_VERSION 1
```
#### Additional Note
This issue seems to be caused by [gensim.corpora.textcorpus.py:513](https://github.com/RaRe-Technologies/gensim/blob/4c941b454a86bdcead2cb1a174e6ec7158253e29/gensim/corpora/textcorpus.py#L513) where TextDirectoryCorpus.getstream() uses the python builtin _open()_ without specifying an _encoding=_ argument. This lets python defaults to using
``` python
locale.getpreferredencoding(False)
```
for an encoding to read the file. Unfortunately, on the aforementioned platform, the above line returns _cp1252_ which cannot decode some of the _utf-8_ characters.
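The failure can be reproduced without gensim at all. Minimal sketch (the sample text is illustrative, chosen so its utf-8 bytes include 0x9d, which has no mapping in cp1252):

```python
import os
import tempfile

# "\u201d" (right double quote) encodes to the bytes E2 80 9D in utf-8;
# 0x9D is undefined in cp1252, matching the 0x9d byte in the traceback above.
path = os.path.join(tempfile.mkdtemp(), "doc.txt")
with open(path, "wb") as f:
    f.write("some text with a \u201cquoted\u201d word".encode("utf-8"))

# What happens on Windows, where locale.getpreferredencoding(False) == "cp1252":
try:
    with open(path, encoding="cp1252") as f:
        f.read()
    decoded_with_cp1252 = True
except UnicodeDecodeError:
    decoded_with_cp1252 = False

# An explicit encoding argument (the proposed fix) reads the file correctly:
with open(path, encoding="utf-8") as f:
    text = f.read()

print(decoded_with_cp1252, "\u201d" in text)  # → False True
```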
##### Workarounds
_Python 3.7 and later_ added [a UTF-8 mode](https://peps.python.org/pep-0540/) where Python reads the environment variable "PYTHONUTF8" and sets _sys.flags.utf8_mode_. If _sys.flags.utf8_mode == 1_, then _locale.getpreferredencoding(False) == "UTF-8"_ and TextDirectoryCorpus is able to load the file.
I spent a few hours tinkering around and reading up on some resources (example: [python's utf-8 mode](https://dev.to/methane/python-use-utf-8-mode-on-windows-212i), [changing locale to change the preferred encoding, did not work](https://stackoverflow.com/questions/27437325/windows-python-changing-encoding-using-the-locale-module)) before discovering the above workaround.
Overall I think this is an easy issue to fix (perhaps by adding an _encoding='utf-8'_ default keyword argument in [TextDirectoryCorpus.\_\_init\_\_(...)](https://github.com/RaRe-Technologies/gensim/blob/4c941b454a86bdcead2cb1a174e6ec7158253e29/gensim/corpora/textcorpus.py#L401) and _self.encoding_ to [gensim.corpora.textcorpus.py:513](https://github.com/RaRe-Technologies/gensim/blob/4c941b454a86bdcead2cb1a174e6ec7158253e29/gensim/corpora/textcorpus.py#L513)) and does not look like it will break anything. It should greatly increase the usability of TextDirectoryCorpus on Windows platforms.
Thanks ☺️ | closed | 2022-04-01T09:04:48Z | 2022-04-15T12:26:04Z | https://github.com/piskvorky/gensim/issues/3316 | [
"bug",
"difficulty easy",
"impact MEDIUM",
"reach LOW"
] | Sandman-Ren | 2 |
katanaml/sparrow | computer-vision | 7 | ModuleNotFoundError: No module named 'tools.utilities' | closed | 2023-06-01T19:09:49Z | 2023-06-20T18:33:24Z | https://github.com/katanaml/sparrow/issues/7 | [
"enhancement"
] | ghazirecinov | 2 | |
microsoft/UFO | automation | 138 | Support for Automatic Task Execution from JSON Files | Currently, UFO operates through command-line interaction, where users input commands one by one for the agent to execute tasks.
I would like to request support for enabling the agent to execute a predefined sequence of tasks from a file, say a JSON file. By defining tasks and their parameters in such a file, users could load it and let the agent execute the workflow automatically. This design would facilitate large-scale task automation for testing purposes, enabling comprehensive evaluation of UFO's performance across different types of tasks.
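To make the idea concrete, here is a minimal sketch. The file schema, field names, and the `execute_one` hook are all hypothetical, not part of UFO's actual API:

```python
import json

# Hypothetical batch file: the field names below are my invention, not UFO's schema.
TASKS_JSON = """
[
  {"request": "Open Notepad and type 'hello'", "timeout_s": 120},
  {"request": "Save the document as hello.txt", "timeout_s": 60}
]
"""

def run_batch(specs, execute_one):
    """Execute each task spec in order, collecting outcomes for later evaluation."""
    results = []
    for spec in specs:
        outcome = execute_one(spec["request"], timeout_s=spec.get("timeout_s", 300))
        results.append({"request": spec["request"], "outcome": outcome})
    return results

# Stand-in for the agent's single-task entry point (UFO's real interface may differ).
def fake_execute(request, timeout_s):
    return "completed"

results = run_batch(json.loads(TASKS_JSON), fake_execute)
for r in results:
    print(r["request"], "->", r["outcome"])
```

The driver loop could then log each outcome to a results file for offline scoring of the whole batch.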
Does this feature align with the project’s scope? If so, I’d appreciate any guidance or suggestions on how best to implement it within the current architecture. Thank you for your time and support!
| closed | 2024-11-23T03:37:05Z | 2024-12-19T14:21:37Z | https://github.com/microsoft/UFO/issues/138 | [] | Calvvnono | 5 |
onnx/onnx | deep-learning | 6,264 | ERROR: Could not build wheels for onnx, onnxsim, imagededup, which is required to install pyproject.toml-based projects | ### Question
I am installing the requirements for using YoloNAS in Deepstream 6.2 as per the instructions below
https://github.com/marcoslucianops/DeepStream-Yolo/blob/master/docs/YOLONAS.md
While installing the requirements in super-gradients I am having error while building onnx
Process exited with code 1
### Further information
Ubuntu Version - 20.04.6
Python Version - 3.12.4
Nvidia Driver - 535.183.01
onnxruntime>=1.15.0
onnx==1.15.0
### Notes
Building wheel for onnxsim (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [148 lines of output]
<string>:28: DeprecationWarning: Use shutil.which instead of find_executable
fatal: not a git repository (or any of the parent directories): .git
fatal: not a git repository (or any of the parent directories): .git
running bdist_wheel
running build
running build_py
running create_version
creating build
creating build/lib.linux-x86_64-cpython-312
creating build/lib.linux-x86_64-cpython-312/onnxsim
copying onnxsim/model_info.py -> build/lib.linux-x86_64-cpython-312/onnxsim
copying onnxsim/__main__.py -> build/lib.linux-x86_64-cpython-312/onnxsim
copying onnxsim/version.py -> build/lib.linux-x86_64-cpython-312/onnxsim
copying onnxsim/__init__.py -> build/lib.linux-x86_64-cpython-312/onnxsim
copying onnxsim/onnx_simplifier.py -> build/lib.linux-x86_64-cpython-312/onnxsim
copying onnxsim/model_checking.py -> build/lib.linux-x86_64-cpython-312/onnxsim
running egg_info
writing onnxsim.egg-info/PKG-INFO
writing dependency_links to onnxsim.egg-info/dependency_links.txt
writing entry points to onnxsim.egg-info/entry_points.txt
writing requirements to onnxsim.egg-info/requires.txt
writing top-level names to onnxsim.egg-info/top_level.txt
reading manifest file 'onnxsim.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching '*.c' under directory 'onnxsim'
warning: no files found matching '*.proto' under directory 'onnxsim'
warning: no previously-included files matching '*' found under directory 'third_party/onnxruntime'
warning: no previously-included files matching '*' found under directory 'third_party/onnx-optimizer/build'
warning: no previously-included files matching '*' found under directory 'third_party/onnx/build'
warning: no previously-included files matching '*' found under directory 'third_party/onnx/onnx/backend'
adding license file 'LICENSE'
writing manifest file 'onnxsim.egg-info/SOURCES.txt'
/tmp/pip-build-env-glo09bto/overlay/lib/python3.12/site-packages/setuptools/command/build_py.py:215: _Warning: Package 'onnxsim.bin' is absent from the `packages` configuration.
!!
********************************************************************************
############################
# Package would be ignored #
############################
Python recognizes 'onnxsim.bin' as an importable package[^1],
but it is absent from setuptools' `packages` configuration.
This leads to an ambiguous overall configuration. If you want to distribute this
package, please make sure that 'onnxsim.bin' is explicitly added
to the `packages` configuration field.
Alternatively, you can also rely on setuptools' discovery methods
(for example by using `find_namespace_packages(...)`/`find_namespace:`
instead of `find_packages(...)`/`find:`).
You can read more about "package discovery" on setuptools documentation page:
- https://setuptools.pypa.io/en/latest/userguide/package_discovery.html
If you don't want 'onnxsim.bin' to be distributed and are
already explicitly excluding 'onnxsim.bin' via
`find_namespace_packages(...)/find_namespace` or `find_packages(...)/find`,
you can try to use `exclude_package_data`, or `include-package-data=False` in
combination with a more fine grained `package-data` configuration.
You can read more about "package data files" on setuptools documentation page:
- https://setuptools.pypa.io/en/latest/userguide/datafiles.html
[^1]: For Python, any directory (with suitable naming) can be imported,
even if it does not contain any `.py` files.
On the other hand, currently there is no concept of package data
directory, all directories are treated like packages.
********************************************************************************
!!
| closed | 2024-07-31T11:50:36Z | 2025-01-24T00:43:42Z | https://github.com/onnx/onnx/issues/6264 | [
"question"
] | maze2498 | 1 |
modelscope/data-juicer | streamlit | 473 | How to calculate the image_text_similarity scores for both Chinese and English? | Thank you for your excellent work.
Regarding my dataset, which includes both English and Chinese samples, I am wondering how I can simultaneously calculate the similarity scores between image and text pairs for both languages.
| closed | 2024-11-05T07:04:28Z | 2025-02-07T03:37:18Z | https://github.com/modelscope/data-juicer/issues/473 | [
"question",
"dj:multimodal",
"stale-issue",
"dj:op"
] | weiaicunzai | 4 |
CorentinJ/Real-Time-Voice-Cloning | pytorch | 751 | Where is the tensorflow version? | I think I am more comfortable with the Tensorflow version, but I really can't find it, I found about a commit 5425557, but I don't seem to find the code | closed | 2021-05-09T06:30:10Z | 2021-05-30T07:34:25Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/751 | [] | bosecodes | 1 |
keras-team/keras | machine-learning | 20,093 | TImeDistributed + Add Layers Error | Using TensorFlow==2.17.0 , Keras==3.4.1 I am having issues when trying to use the Add Layer together with the TImeDistributed one:
```
X = TimeDistributed(Add(), name='add_residual_convolution_' + str(it))([X, X_residual])
ValueError: `TimeDistributed` Layer should be passed an `input_shape` with at least 3 dimensions, received: [(None, 12, 0, 2), (None, 12, 0, 2)]
```
I have also tried passing the `input_shape=X.shape` argument to TimeDistributed, but the same error appears.
How can I solve this? | closed | 2024-08-07T08:55:39Z | 2025-02-01T02:02:49Z | https://github.com/keras-team/keras/issues/20093 | [
"type:support",
"stat:awaiting response from contributor",
"stale"
] | marta-q | 12 |
encode/databases | asyncio | 234 | Documentation lacks guidance on creating database tables | The [Databases Introduction](https://www.encode.io/databases/) executes a raw SQL query like so:
```python
query = """CREATE TABLE HighScores (id INTEGER PRIMARY KEY, name VARCHAR(100), score INTEGER)"""
await database.execute(query=query)
```
The next page, [Database Queries](https://www.encode.io/databases/database_queries/) instead uses SQLAlchemy Core like so:
```python
import sqlalchemy
metadata = sqlalchemy.MetaData()
notes = sqlalchemy.Table(
"notes",
metadata,
sqlalchemy.Column("id", sqlalchemy.Integer, primary_key=True),
sqlalchemy.Column("text", sqlalchemy.String(length=100)),
sqlalchemy.Column("completed", sqlalchemy.Boolean),
)
...
from databases import Database
database = Database('postgresql://localhost/example')
# Establish the connection pool
await database.connect()
# Execute
query = notes.insert()
values = {"text": "example1", "completed": True}
await database.execute(query=query, values=values)
```
Nowhere does it actually show how to create the tables. **I think the documentation could do a better job at showing a full example that runs and works**.
The documentation does link to the [SQLAlchemy core official tutorial](https://docs.sqlalchemy.org/en/latest/core/tutorial.html); however, this tutorial talks about "engines", which are nowhere to be found in the `databases` library, so it's not easy to find out how to actually create a table or adapt their examples. One may naively try to use the `database` as the `engine`, which fails with a rather cryptic "AttributeError: 'Database' object has no attribute '_run_visitor'" error.
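For reference, here is the pattern I ended up assuming is intended (a sketch pieced together myself, not taken from the docs): a throwaway synchronous SQLAlchemy engine handles table creation, and the async `Database` object handles everything else.

```python
import sqlalchemy

# Sketch of the missing step: use a plain synchronous SQLAlchemy engine once,
# purely for DDL (CREATE TABLE), then never touch it again.
metadata = sqlalchemy.MetaData()
notes = sqlalchemy.Table(
    "notes",
    metadata,
    sqlalchemy.Column("id", sqlalchemy.Integer, primary_key=True),
    sqlalchemy.Column("text", sqlalchemy.String(length=100)),
    sqlalchemy.Column("completed", sqlalchemy.Boolean),
)

engine = sqlalchemy.create_engine("sqlite://")  # e.g. "postgresql://localhost/example"
metadata.create_all(engine)                     # emits the CREATE TABLE statements

table_names = sqlalchemy.inspect(engine).get_table_names()
print(table_names)  # → ['notes']

# All actual queries then go through the async Database object as usual:
# database = Database("postgresql://localhost/example")
# await database.connect()
# await database.execute(query=notes.insert(), values={"text": "x", "completed": True})
```

If this is indeed the intended approach, a worked example like this in the docs would have saved a lot of guessing.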
**I think the documentation could do a better job at explaining how one should map the concepts from the SQLAlchemy tutorial for use in this library**.
Finally, the documentation does not seem to provide a reference section to learn all the methods available and look at their internal documentation without the need for Python's `help()`. **I think the documentation should have a section with the reference documentation for all public types**.
All of this is from the point of view who wants to use `databases` as their only interface to databases, and has no experience with `sqlalchemy`. Given that `sqlalchemy` is the recommended way to use it, I think the library should put some effort into explaining how the both libraries work together. | closed | 2020-08-10T21:41:05Z | 2022-12-01T13:43:39Z | https://github.com/encode/databases/issues/234 | [] | Lonami | 7 |
Yorko/mlcourse.ai | numpy | 738 | Proofread topic 2 | - Fix issues
- Fix typos
- Correct the translation where needed
- Add images where necessary | closed | 2023-02-04T13:53:11Z | 2024-08-25T07:43:16Z | https://github.com/Yorko/mlcourse.ai/issues/738 | [
"enhancement",
"articles"
] | Yorko | 0 |
chrieke/prettymapp | streamlit | 67 | feature request: elevation "iso-lines" | Thanks a lot for your work!
I was wondering if there's a way to draw contour lines on the map. It's not clear to me if OpenStreetMap provides elevation data. | closed | 2024-12-22T09:58:46Z | 2024-12-28T18:40:46Z | https://github.com/chrieke/prettymapp/issues/67 | [] | JulienMaille | 1 |
huggingface/datasets | numpy | 6,446 | Speech Commands v2 dataset doesn't match AST-v2 config | ### Describe the bug
[According](https://huggingface.co/MIT/ast-finetuned-speech-commands-v2) to `MIT/ast-finetuned-speech-commands-v2`, the model was trained on the Speech Commands v2 dataset. However, while the model config says the model should have 35 class labels, the dataset itself has 36 class labels. Moreover, the class labels themselves don't match between the model config and the dataset. It is difficult to reproduce the data used to fine tune `MIT/ast-finetuned-speech-commands-v2`.
### Steps to reproduce the bug
```
>>> model = ASTForAudioClassification.from_pretrained("MIT/ast-finetuned-speech-commands-v2")
>>> model.config.id2label
{0: 'backward', 1: 'follow', 2: 'five', 3: 'bed', 4: 'zero', 5: 'on', 6: 'learn', 7: 'two', 8: 'house', 9: 'tree', 10: 'dog', 11: 'stop', 12: 'seven', 13: 'eight', 14: 'down', 15: 'six', 16: 'forward', 17: 'cat', 18: 'right', 19: 'visual', 20: 'four', 21: 'wow', 22: 'no', 23: 'nine', 24: 'off', 25: 'three', 26: 'left', 27: 'marvin', 28: 'yes', 29: 'up', 30: 'sheila', 31: 'happy', 32: 'bird', 33: 'go', 34: 'one'}
>>> dataset = load_dataset("speech_commands", "v0.02", split="test")
>>> torch.unique(torch.Tensor(dataset['label']))
tensor([ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 11., 12., 13.,
14., 15., 16., 17., 18., 19., 20., 21., 22., 23., 24., 25., 26., 27.,
28., 29., 30., 31., 32., 33., 34., 35.])
```
If you try to explore the [dataset itself](https://huggingface.co/datasets/speech_commands/viewer/v0.02/test), you can see that the id to label does not match what is provided by `model.config.id2label`.
### Expected behavior
The labels should match completely and there should be the same number of label classes between the model config and the dataset itself.
### Environment info
datasets = 2.14.6, transformers = 4.33.3 | closed | 2023-11-22T20:46:36Z | 2023-11-28T14:46:08Z | https://github.com/huggingface/datasets/issues/6446 | [] | vymao | 3 |
graphql-python/graphene-sqlalchemy | graphql | 65 | Subclassing Connection in 2.0 | Hi, in the 2.0dev version I can't see a way to subclass `relay.Connection`. Because SQLAlchemyObjectType doesn't have a Connection attribute, the only way I see is to pass the connection via the Meta class. Otherwise the connection is automatically created with the default Connection class. But I can't specify the subclassed connection in Meta, because it needs a node attribute and I get a circular reference. E.g.
```python
class UserConnection(graphene.relay.Connection):
class Meta:
node = User
class User(SQLAlchemyObjectType):
class Meta:
model = UserModel
interfaces = (relay.Node, )
connection = UserConnection
```
So I don't understand what the point of the connection parameter in the SQLAlchemyObjectType Meta class is. To make it work, I have changed SQLAlchemyObjectType.\_\_init_subclass_with_meta_\_ and introduced a connection_type parameter. Now I can make an abstract subclassed Connection and pass it to Meta:
```python
class SQLAlchemyObjectType(ObjectType):
@classmethod
def __init_subclass_with_meta__(cls, model=None, registry=None, skip_registry=False,
only_fields=(), exclude_fields=(), connection=None, connection_type=None,
use_connection=None, interfaces=(), id=None, **options):
...
if use_connection and not connection:
# We create the connection automatically
connection = (connection_type or Connection).create_type('{}Connection'.format(cls.__name__), node=cls)
...
```
```python
class UserConnection(graphene.relay.Connection):
class Meta:
abstract = True
class User(SQLAlchemyObjectType):
class Meta:
model = UserModel
interfaces = (relay.Node, )
connection_type = UserConnection
```
Maybe I don't understand something and there is an easier way to make it work? | closed | 2017-08-06T02:10:06Z | 2023-02-25T00:48:34Z | https://github.com/graphql-python/graphene-sqlalchemy/issues/65 | [] | nmrtv | 4 |
aeon-toolkit/aeon | scikit-learn | 1,734 | [ENH] Remove inheritance from BaseTransformer to BaseCollectionTransformer and remove CollectionToSeriesWrapper | ### Describe the feature or idea you want to propose
with the imminent deprecation of BaseTransformer, we should remove its uses in preparation
### Describe your proposed solution
Remove inheritance in BaseCollectionTransformer and remove CollectionToSeriesWrapper
### Describe alternatives you've considered, if relevant
_No response_
### Additional context
_No response_ | closed | 2024-06-27T20:16:51Z | 2024-09-17T20:40:38Z | https://github.com/aeon-toolkit/aeon/issues/1734 | [
"enhancement",
"transformations"
] | TonyBagnall | 1 |
widgetti/solara | fastapi | 169 | ipywidgets slider not working with Solara >v1.17.0 | The ipywidget slider used to work fine with Solara v1.17.0 and earlier.

However, the slider no longer show up with Solara v1.17.1. Is there any changes recently that affect ipywdiget slider?

| closed | 2023-06-23T03:49:24Z | 2024-01-16T15:45:29Z | https://github.com/widgetti/solara/issues/169 | [] | giswqs | 6 |
jmcarpenter2/swifter | pandas | 203 | Inform the user whether multiprocessing was used | Hi, thanks for this cool library.
One thing that would be nice would be to tell the user what swifter decided to do, e.g.:
- was it able to vectorize?
- did it choose to apply multiprocessing with Dask?
Right now it seems everything is totally transparent to the user; I cannot easily tell if swifter is even using more than one core. | open | 2022-10-04T21:39:15Z | 2023-03-24T17:32:42Z | https://github.com/jmcarpenter2/swifter/issues/203 | [
"enhancement"
] | tadamcz | 1 |
graphql-python/graphene-django | django | 551 | How to resolve many-to-many relation with "through" option | I have raised a question in stack overflow. Here is the link
[https://stackoverflow.com/questions/53013384/how-to-resolve-graphene-django-node-field-with-many-to-many-relationship](url)
I was expecting it to be auto resolved as the relations are defined in models. Looking forward to get some help.
| closed | 2018-11-05T16:57:04Z | 2021-04-14T20:10:07Z | https://github.com/graphql-python/graphene-django/issues/551 | [
"question"
] | rajkrishnanind | 1 |
aeon-toolkit/aeon | scikit-learn | 2,302 | [dgr/timeseriesscaler] is STALE | @dguijo,
dgr/timeseriesscaler has had no activity for 258 days.
This branch will be automatically deleted in 0 days. | closed | 2024-11-04T01:28:16Z | 2024-11-11T01:28:39Z | https://github.com/aeon-toolkit/aeon/issues/2302 | [
"stale branch"
] | aeon-actions-bot[bot] | 0 |
microsoft/hummingbird | scikit-learn | 229 | Enable cross exporting between models | For the moment we only allow:
- sklearn, xgb and lgbm to be exported in pytorch \ torchscript format
- onnx-ml models to be exported into onnx
We already have the code setup to remove this limitation and allow any supported input model to be exported into any supported backend. | closed | 2020-08-13T22:33:40Z | 2020-08-13T23:16:25Z | https://github.com/microsoft/hummingbird/issues/229 | [] | interesaaat | 0 |
babysor/MockingBird | deep-learning | 60 | Supplement the tutorial (补充教程) | closed | 2021-08-28T00:29:54Z | 2021-08-29T12:05:14Z | https://github.com/babysor/MockingBird/issues/60 | [] | babysor | 0 |
pnkraemer/tueplots | matplotlib | 76 | Font seems off | Hi,
I tried using tueplots but encountered a weird bug where my plot went from this:
<img src="https://user-images.githubusercontent.com/49155617/150942802-88e53b62-ee2c-4ab7-b78f-894d56efacf4.png" alt="im1" width="500"/>
to this:
<img src="https://user-images.githubusercontent.com/49155617/150942803-240a0e0e-3a81-412d-b2ce-e487e478b0fa.png" alt="im2" width="300"/>
just by removing defined figsize and adding:
```
from tueplots import bundles
plt.rcParams.update(bundles.neurips2021(usetex=False, ncols=2, nrows=1))
```
Example code can be found [here](https://colab.research.google.com/drive/13V8T4KK59k-xhXDgoshbuIRRo_vBan8T#scrollTo=rqdTg9UcWly2).
Is this a bug or am I missing something? | closed | 2022-01-25T08:52:03Z | 2022-01-25T10:54:36Z | https://github.com/pnkraemer/tueplots/issues/76 | [] | dmandic17 | 4 |
vastsa/FileCodeBox | fastapi | 132 | Does file upload support chunked upload? | Does it support chunked (multipart) upload? For relatively large files, an exception is still raised even when the timeout is set very long. | closed | 2024-02-02T11:05:17Z | 2025-02-06T12:14:19Z | https://github.com/vastsa/FileCodeBox/issues/132 | [] | leim | 1 |
tortoise/tortoise-orm | asyncio | 1,262 | Unable to create Model with custom PrimaryKey | **Describe the bug**
When creating a model, defining an attribute with `pk=True` causes Tortoise to throw the error below:
`tortoise.exceptions.ConfigurationError: Can't create model Post with two primary keys, only single primary key is supported`
It seems to work fine if I explicitly define `id` as not being the PK:
`id = fields.SmallIntField(pk=False, required=False)`
**To Reproduce**
Steps to reproduce the behavior, preferably a small code snippet.
Create a new model with a custom field as the primary key, but do not explicitly define `id`:
```python
class Post(BaseModel):
    id = fields.SmallIntField(pk=False, required=False)
    PostID = fields.SmallIntField(pk=True, unique=True)
    RawPost = fields.CharField(max_length=1600)
    Timestamp = fields.DatetimeField()
    Channel = fields.CharField(max_length=100)

    def __str__(self) -> str:
        return str(self.PostID)
```
**Expected behavior**
A clear and concise description of what you expected to happen.
Tortoise should be smart enough to notice I'm defining a custom PK, without my having to explicitly define `id`.
**Additional context**
Add any other context about the problem here.
The docs do not mention that `id` must be explicitly defined with `pk=False`.
| open | 2022-09-24T22:01:25Z | 2022-12-04T14:28:05Z | https://github.com/tortoise/tortoise-orm/issues/1262 | [] | Tetrokun | 1 |
OpenInterpreter/open-interpreter | python | 893 | How can I make the results come back faster without showing the code generation and execution process? | ### Is your feature request related to a problem? Please describe.
Great work, and thank you for open-sourcing this and for the effort. I have some questions: can it be made faster, even by not showing me the process of code generation and execution? I tested it, and reading from a CSV and drawing a histogram takes 30-45 seconds. If I don't need it to show me the code being written and run, would that make the results come back faster?
### Describe the solution you'd like
reading from csv and doing A histogram takes 30-45 seconds.
### Describe alternatives you've considered
_No response_
### Additional context
_No response_ | closed | 2024-01-09T11:11:27Z | 2024-10-31T16:58:51Z | https://github.com/OpenInterpreter/open-interpreter/issues/893 | [
"Enhancement"
] | rainbownmm | 0 |
kennethreitz/responder | graphql | 393 | Plans on implementing Graphene Subscriptions? | A few implementation examples here:
https://github.com/graphql-python/graphql-ws
I suppose the already implemented WebSockets support would be a good fit for this. | closed | 2019-09-25T23:42:27Z | 2024-03-31T00:57:29Z | https://github.com/kennethreitz/responder/issues/393 | [] | dsod | 0 |
snarfed/granary | rest-api | 173 | Mastodon: how to handle domain blocks? | i haven't implemented block lists in Mastodon yet since Mastodon lets users [block entire domains](https://docs.joinmastodon.org/usage/privacy/#hiding-an-entire-server) ([API](https://docs.joinmastodon.org/api/rest/domain-blocks/)), ie other instances, as well as individual users. i haven't thought through how to represent this in AS1 yet. | open | 2019-10-21T22:05:24Z | 2019-10-21T22:05:24Z | https://github.com/snarfed/granary/issues/173 | [] | snarfed | 0 |
dpgaspar/Flask-AppBuilder | rest-api | 1705 | Show sign-in page after user logs out when using a single OAuth2 provider |
### Environment
Flask-Appbuilder version: 3.3.3
pip freeze output:
### Describe the expected results
Given a **single** OAuth2 provider configured, when a signed-in user tries to log out, it should log the user out.
https://github.com/dpgaspar/Flask-AppBuilder/blob/211284bc5792d2a24a486f5c2b0f4fc86de89fbe/flask_appbuilder/security/views.py#L607-L615
### Describe the actual results
The user will be automatically signed back in after trying to log out.
### Steps to reproduce
When signed in via the configured OAuth2 provider, visiting `/logout` redirects the user back to `/login` and then signs the user in automatically, since the logout flow doesn't actually log the user out of the OAuth2 provider. | closed | 2021-09-30T15:21:40Z | 2021-12-07T16:33:51Z | https://github.com/dpgaspar/Flask-AppBuilder/issues/1705 | [] | shawnzhu | 11 |
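A common fix for the issue above is RP-initiated logout: after clearing the local session, redirect the browser to the provider's end-session endpoint instead of straight back to `/login`. A minimal stdlib sketch of building such a URL; the endpoint and client id are assumptions, and the parameter names follow the OIDC RP-Initiated Logout spec:

```python
from urllib.parse import urlencode

def provider_logout_url(end_session_endpoint: str,
                        post_logout_redirect_uri: str,
                        client_id: str) -> str:
    """Build an OIDC RP-initiated logout URL.

    Redirecting the browser here asks the provider to end its own session
    too, so the user is not silently signed back in on the next /login.
    """
    params = {
        "post_logout_redirect_uri": post_logout_redirect_uri,
        "client_id": client_id,
    }
    return f"{end_session_endpoint}?{urlencode(params)}"

# hypothetical values for illustration
url = provider_logout_url(
    "https://idp.example.com/oauth2/logout",
    "https://app.example.com/login",
    "my-client-id",
)
print(url)
```

In Flask-AppBuilder this would mean overriding the logout view in a custom security manager so it returns a redirect to this URL after `logout_user()`; whether your provider exposes an end-session endpoint depends on the provider.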
iperov/DeepFaceLab | deep-learning | 5,346 | Nina | closed | 2021-06-09T01:34:45Z | 2021-06-09T16:47:28Z | https://github.com/iperov/DeepFaceLab/issues/5346 | [] | Mitan01 | 0 | |
agronholm/anyio | asyncio | 131 | How should we implement forceful close of async resources? | Sometimes we need to close connections and other async resources forcefully. To this end, trio has a function named `aclose_forcefully()` which creates a cancel scope and within it, cancels the scope and *then* calls `aclose()` on the resource.
My other idea is that we could add a parameter `force: bool = False` to the `AsyncResource.aclose()` interface.
Pros of `force`:
- Cleaner (?)
- The callee has better control over the procedure
Cons:
- Another divergence from the trio API
- Requires more care from the developer of the resource | closed | 2020-07-27T09:53:22Z | 2020-07-31T13:45:59Z | https://github.com/agronholm/anyio/issues/131 | [
"design"
] | agronholm | 1 |
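To make the trade-off in the issue above concrete, here is a toy asyncio-based sketch of the `force: bool = False` design: a resource whose graceful close flushes buffered data, while a forced close skips the flush. This illustrates the proposed API shape only; it is not anyio's or trio's actual implementation:

```python
import asyncio

class ToyStream:
    """Toy async resource illustrating graceful vs. forceful close."""

    def __init__(self) -> None:
        self.buffer = ["pending"]
        self.closed = False

    async def aclose(self, force: bool = False) -> None:
        if not force:
            # graceful path: flush buffered data before closing
            await asyncio.sleep(0)
            self.buffer.clear()
        # forceful path leaves the buffer unflushed and closes immediately
        self.closed = True

async def main() -> None:
    graceful, forced = ToyStream(), ToyStream()
    await graceful.aclose()
    await forced.aclose(force=True)
    print(graceful.buffer, forced.buffer)  # [] ['pending']

asyncio.run(main())
```

The trio-style alternative keeps `aclose()` parameterless and instead wraps the call in an already-cancelled scope, which pushes the control to the caller rather than the resource author.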
xinntao/Real-ESRGAN | pytorch | 793 | gt_size: 128 | Can I get consistent results when using ```gt_size: 128``` with a 2x finetune? (64x64 LR / 128x128 HR paired dataset)
My model has already suddenly collapsed twice (maybe due to a high learning rate).
<details>


</details>
Also, can noise reducers be finetuned from the released general upscale models,
or does that require training from scratch? | open | 2024-05-04T04:31:50Z | 2024-05-06T14:53:07Z | https://github.com/xinntao/Real-ESRGAN/issues/793 | [] | SA-j00u | 0 |
litestar-org/litestar | api | 3,409 | Bug: can't upload multiple files where `data: Optional[List[UploadFile]]` | ### Description
The following test fails for the 2 files case.
### URL to code causing the issue
_No response_
### MCVE
```python
@pytest.mark.parametrize("file_count", (0, 1, 2))
def test_upload_multiple_files_optional(file_count: int) -> None:
@post("/")
async def handler(data: Optional[List[UploadFile]] = Body(media_type=RequestEncodingType.MULTI_PART)) -> None:
if data is None:
return
assert len(data) == file_count
for file in data:
assert await file.read() == b"1"
with create_test_client([handler], openapi_config=None) as client:
files_to_upload = [("file", b"1") for _ in range(file_count)]
response = client.post("/", files=files_to_upload or None)
assert response.status_code == HTTP_201_CREATED
```
### Steps to reproduce
```bash
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
```
### Screenshots
```bash
""
```
### Logs
```bash
Traceback (most recent call last):
File "/home/peter/PycharmProjects/litestar/litestar/_signature/model.py", line 94, in _deserializer
return default_deserializer(target_type, value)
File "/home/peter/PycharmProjects/litestar/litestar/serialization/msgspec_hooks.py", line 127, in default_deserializer
raise TypeError(f"Unsupported type: {type(value)!r}")
TypeError: Unsupported type: <class 'list'>
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/peter/PycharmProjects/litestar/litestar/_signature/model.py", line 194, in parse_values_from_connection_kwargs
return convert(kwargs, cls, strict=False, dec_hook=deserializer, str_keys=True).to_dict()
msgspec.ValidationError: Unsupported type: <class 'list'> - at `$.data[0]`
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/peter/PycharmProjects/litestar/litestar/middleware/exceptions/middleware.py", line 216, in __call__
await self.app(scope, receive, send)
File "/home/peter/PycharmProjects/litestar/litestar/routes/http.py", line 82, in handle
response = await self._get_response_for_request(
File "/home/peter/PycharmProjects/litestar/litestar/routes/http.py", line 134, in _get_response_for_request
return await self._call_handler_function(
File "/home/peter/PycharmProjects/litestar/litestar/routes/http.py", line 154, in _call_handler_function
response_data, cleanup_group = await self._get_response_data(
File "/home/peter/PycharmProjects/litestar/litestar/routes/http.py", line 193, in _get_response_data
parsed_kwargs = route_handler.signature_model.parse_values_from_connection_kwargs(
File "/home/peter/PycharmProjects/litestar/litestar/_signature/model.py", line 207, in parse_values_from_connection_kwargs
raise cls._create_exception(messages=messages, connection=connection) from e
litestar.exceptions.http_exceptions.ValidationException: 400: Validation failed for POST /
Traceback (most recent call last):
File "/home/peter/PycharmProjects/litestar/litestar/_signature/model.py", line 94, in _deserializer
return default_deserializer(target_type, value)
File "/home/peter/PycharmProjects/litestar/litestar/serialization/msgspec_hooks.py", line 127, in default_deserializer
raise TypeError(f"Unsupported type: {type(value)!r}")
TypeError: Unsupported type: <class 'list'>
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/peter/PycharmProjects/litestar/litestar/_signature/model.py", line 194, in parse_values_from_connection_kwargs
return convert(kwargs, cls, strict=False, dec_hook=deserializer, str_keys=True).to_dict()
msgspec.ValidationError: Unsupported type: <class 'list'> - at `$.data[0]`
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/peter/PycharmProjects/litestar/litestar/middleware/exceptions/middleware.py", line 216, in __call__
await self.app(scope, receive, send)
File "/home/peter/PycharmProjects/litestar/litestar/routes/http.py", line 82, in handle
response = await self._get_response_for_request(
File "/home/peter/PycharmProjects/litestar/litestar/routes/http.py", line 134, in _get_response_for_request
return await self._call_handler_function(
File "/home/peter/PycharmProjects/litestar/litestar/routes/http.py", line 154, in _call_handler_function
response_data, cleanup_group = await self._get_response_data(
File "/home/peter/PycharmProjects/litestar/litestar/routes/http.py", line 193, in _get_response_data
parsed_kwargs = route_handler.signature_model.parse_values_from_connection_kwargs(
File "/home/peter/PycharmProjects/litestar/litestar/_signature/model.py", line 207, in parse_values_from_connection_kwargs
raise cls._create_exception(messages=messages, connection=connection) from e
litestar.exceptions.http_exceptions.ValidationException: 400: Validation failed for POST /
```
### Litestar Version
main
### Platform
- [X] Linux
- [ ] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above) | closed | 2024-04-21T00:01:24Z | 2025-03-20T15:54:37Z | https://github.com/litestar-org/litestar/issues/3409 | [
"Bug :bug:"
] | peterschutt | 1 |
gto76/python-cheatsheet | python | 139 | Шпаргалка python | closed | 2022-12-16T12:46:23Z | 2022-12-16T12:48:11Z | https://github.com/gto76/python-cheatsheet/issues/139 | [] | YarKoniukhov | 0 | |
Lightning-AI/LitServe | rest-api | 88 | Too many files open with 1K requests | Sending a request to `request_buffer` creates a new pipe, and too many concurrent requests cause an `OSError: Too many open files` error:
```py
self.app.request_buffer[uid] = (await request.json(), write)
```
cc: @lantiga
---
```
INFO: 127.0.0.1:45038 - "POST /predict HTTP/1.1" 500 Internal Server Error
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/home/aniket/miniconda3/envs/am/lib/python3.12/site-packages/uvicorn/protocols/http/httptools_impl.py", line 411, in run_asgi
result = await app( # type: ignore[func-returns-value]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/aniket/miniconda3/envs/am/lib/python3.12/site-packages/uvicorn/middleware/proxy_headers.py", line 69, in __call__
return await self.app(scope, receive, send)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/aniket/miniconda3/envs/am/lib/python3.12/site-packages/fastapi/applications.py", line 1054, in __call__
await super().__call__(scope, receive, send)
File "/home/aniket/miniconda3/envs/am/lib/python3.12/site-packages/starlette/applications.py", line 123, in __call__
await self.middleware_stack(scope, receive, send)
File "/home/aniket/miniconda3/envs/am/lib/python3.12/site-packages/starlette/middleware/errors.py", line 186, in __call__
raise exc
File "/home/aniket/miniconda3/envs/am/lib/python3.12/site-packages/starlette/middleware/errors.py", line 164, in __call__
await self.app(scope, receive, _send)
File "/home/aniket/miniconda3/envs/am/lib/python3.12/site-packages/starlette/middleware/exceptions.py", line 65, in __call__
await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
File "/home/aniket/miniconda3/envs/am/lib/python3.12/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
raise exc
File "/home/aniket/miniconda3/envs/am/lib/python3.12/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "/home/aniket/miniconda3/envs/am/lib/python3.12/site-packages/starlette/routing.py", line 756, in __call__
await self.middleware_stack(scope, receive, send)
File "/home/aniket/miniconda3/envs/am/lib/python3.12/site-packages/starlette/routing.py", line 776, in app
await route.handle(scope, receive, send)
File "/home/aniket/miniconda3/envs/am/lib/python3.12/site-packages/starlette/routing.py", line 297, in handle
await self.app(scope, receive, send)
File "/home/aniket/miniconda3/envs/am/lib/python3.12/site-packages/starlette/routing.py", line 77, in app
await wrap_app_handling_exceptions(app, request)(scope, receive, send)
File "/home/aniket/miniconda3/envs/am/lib/python3.12/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
raise exc
File "/home/aniket/miniconda3/envs/am/lib/python3.12/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "/home/aniket/miniconda3/envs/am/lib/python3.12/site-packages/starlette/routing.py", line 72, in app
response = await func(request)
^^^^^^^^^^^^^^^^^^^
File "/home/aniket/miniconda3/envs/am/lib/python3.12/site-packages/fastapi/routing.py", line 278, in app
raw_response = await run_endpoint_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/aniket/miniconda3/envs/am/lib/python3.12/site-packages/fastapi/routing.py", line 191, in run_endpoint_function
return await dependant.call(**values)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/pycharm_project_34/src/litserve/server.py", line 395, in predict
self.app.request_buffer[uid] = (await request.json(), write)
~~~~~~~~~~~~~~~~~~~~~~~^^^^^
File "<string>", line 2, in __setitem__
File "/home/aniket/miniconda3/envs/am/lib/python3.12/multiprocessing/managers.py", line 820, in _callmethod
conn.send((self._id, methodname, args, kwds))
File "/home/aniket/miniconda3/envs/am/lib/python3.12/multiprocessing/connection.py", line 206, in send
self._send_bytes(_ForkingPickler.dumps(obj))
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/aniket/miniconda3/envs/am/lib/python3.12/multiprocessing/reduction.py", line 51, in dumps
cls(buf, protocol).dump(obj)
File "/home/aniket/miniconda3/envs/am/lib/python3.12/multiprocessing/connection.py", line 1172, in reduce_connection
df = reduction.DupFd(conn.fileno())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/aniket/miniconda3/envs/am/lib/python3.12/multiprocessing/reduction.py", line 198, in DupFd
return resource_sharer.DupFd(fd)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/aniket/miniconda3/envs/am/lib/python3.12/multiprocessing/resource_sharer.py", line 48, in __init__
new_fd = os.dup(fd)
^^^^^^^^^^
OSError: [Errno 24] Too many open files
``` | closed | 2024-05-14T17:51:42Z | 2024-05-18T00:42:41Z | https://github.com/Lightning-AI/LitServe/issues/88 | [
"bug",
"help wanted"
] | aniketmaurya | 3 |
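Until the pipe-per-request design is changed, a common workaround for `Errno 24` is to raise the process's open-file limit at startup. A minimal Unix-only sketch using the stdlib `resource` module; the threshold of 4096 is an assumption and should be sized to the expected concurrency:

```python
import resource

def ensure_nofile_limit(minimum: int = 4096) -> int:
    """Raise the soft RLIMIT_NOFILE up to `minimum`, capped at the hard limit.

    Returns the soft limit in effect afterwards. Unix-only: the `resource`
    module is unavailable on Windows.
    """
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    if soft != resource.RLIM_INFINITY and soft < minimum:
        if hard == resource.RLIM_INFINITY or minimum <= hard:
            new_soft = minimum
        else:
            new_soft = hard  # cannot exceed the hard limit unprivileged
        resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft, hard))
    return resource.getrlimit(resource.RLIMIT_NOFILE)[0]

print(ensure_nofile_limit(1024))
```

This only buys headroom; the underlying fix is still to stop duplicating a pipe file descriptor per request.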
autokey/autokey | automation | 973 | Add pyasyncore for Python 3.12 support | Use this line in the `setup.cfg` file to make it so that the **pyasyncore** module is only installed during `tox` testing for Python 3.12.0 or later:
```
pyasyncore; python_version>='3.12'
```
Make this change on the **master** and **develop** branches.
_Based on a post by @dlk3 in https://github.com/autokey/autokey/issues/964#issuecomment-2480907541_
<br/>
<hr/>
<details><summary>This repo is using Opire - what does it mean? 👇</summary><br/>💵 Everyone can add rewards for this issue commenting <code>/reward 100</code> (replace <code>100</code> with the amount).<br/>🕵️♂️ If someone starts working on this issue to earn the rewards, they can comment <code>/try</code> to let everyone know!<br/>🙌 And when they open the PR, they can comment <code>/claim #973</code> either in the PR description or in a PR's comment.<br/><br/>🪙 Also, everyone can tip any user commenting <code>/tip 20 @Elliria</code> (replace <code>20</code> with the amount, and <code>@Elliria</code> with the user to tip).<br/><br/>📖 If you want to learn more, check out our <a href="https://docs.opire.dev">documentation</a>.</details> | open | 2024-11-17T22:44:51Z | 2024-12-23T14:52:57Z | https://github.com/autokey/autokey/issues/973 | [
"infrastructure",
"environment"
] | Elliria | 0 |
desec-io/desec-stack | rest-api | 147 | Remove RRset write-support for locked users | Currently, locked users can write RRsets to the application database. Only upon unlocking the user account, these RRsets will be propagated to pdns.
This leads to various difficulties regarding validation and consistency, see https://github.com/PowerDNS/pdns/issues/7565#issuecomment-476397025
As the benefit of the feature is questionable, nobody will miss it if it's not there, and it even may lead to confusion ("My PATCH was successful, but changes don't get propagated?!"), it is better to remove the feature. | closed | 2019-03-26T08:55:12Z | 2019-04-17T09:34:23Z | https://github.com/desec-io/desec-stack/issues/147 | [
"bug",
"api",
"prio: medium",
"easy"
] | peterthomassen | 1 |
dbfixtures/pytest-postgresql | pytest | 583 | Unable to start postgres "factories.postgresql_proc" | ### What action do you want to perform
We are running `pytest` under `image: python:3.7-alpine`, with some packages pre-installed:
```
apk update && apk add postgresql-dev gcc libc-dev linux-headers python3-dev musl-dev && rm -rf /var/cache/apk/*
- psycopg2-binary>=2.9.3
- pytest-postgresql==3.1.3
```
Example of our code in our test file
```
socket_dir = tempfile.TemporaryDirectory()
postgresql_my_proc = factories.postgresql_proc(port=None, unixsocketdir=socket_dir.name)
postgresql_my = factories.postgresql("postgresql_my_proc")
@pytest.fixture(scope="function")
def setup_db(postgresql_my):
def dbcreator():
return postgresql_my.cursor().connection
engine = create_engine("postgresql+psycopg2://", creator=dbcreator)
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)
session = Session()
yield session
session.close()
```
### What are the results
The error we got:
```
FileNotFoundError: [Errno 2] No such file or directory: '/usr/libexec/postgresql14/pg_ctl': '/usr/libexec/postgresql14/pg_ctl'
```
It seems like `postgresql_proc` is not able to find `pg_ctl`.

It would be greatly appreciated if anyone could help us with this issue. 😊
| closed | 2022-03-24T20:51:59Z | 2023-05-19T11:10:35Z | https://github.com/dbfixtures/pytest-postgresql/issues/583 | [] | zhangchi1 | 1 |
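On Alpine the PostgreSQL binaries often live outside the locations probed by default, which matches the `/usr/libexec/postgresql14/pg_ctl` not-found error above. A small stdlib helper can locate `pg_ctl` so it can be passed explicitly (pytest-postgresql also accepts the path via the `--postgresql-exec` option; the search patterns below are assumptions for common Alpine/Debian/RHEL layouts):

```python
import glob
import shutil
from typing import Optional

def find_pg_ctl() -> Optional[str]:
    """Locate pg_ctl: try PATH first, then common install layouts."""
    found = shutil.which("pg_ctl")
    if found:
        return found
    patterns = (
        "/usr/libexec/postgresql*/pg_ctl",   # Alpine
        "/usr/lib/postgresql/*/bin/pg_ctl",  # Debian/Ubuntu
        "/usr/pgsql-*/bin/pg_ctl",           # RHEL-style
    )
    for pattern in patterns:
        matches = sorted(glob.glob(pattern))
        if matches:
            return matches[-1]  # prefer the highest version found
    return None

# hypothetical usage in conftest.py, assuming the `executable` argument:
# postgresql_my_proc = factories.postgresql_proc(
#     port=None, unixsocketdir=socket_dir.name, executable=find_pg_ctl())
print(find_pg_ctl())
```

If `find_pg_ctl()` returns `None` inside the container, the PostgreSQL server package itself (not just `postgresql-dev`) still needs to be installed.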
erdewit/ib_insync | asyncio | 515 | Borrow rate | Borrow rate (for shorting shares) is available in the undocumented IB API, through generic tick 499:
https://groups.io/g/twsapi/topic/28217344#41103
This is how TWS manages to display borrow rates in the live UI.
However this generic tick, although stable, is not documented officially by IB yet. Consequently, it is not available in ib_insync, despite being vital for shorting strategies.
Is ib_insync's intent to stick 100% to the documented part of the API, or is it possible today to push a PR that would integrate these 2 new fields into the Ticker (fee rate and borrow rate)? In that case I can propose a pull request. | closed | 2022-10-21T17:27:52Z | 2022-10-22T17:15:25Z | https://github.com/erdewit/ib_insync/issues/515 | [] | rgeronimi | 5 |
pallets-eco/flask-sqlalchemy | flask | 707 | Remove "convert_unicode" | Hi,
The "convert_unicode" argument is being used in flask-sqlalchemy, however it's deprecated in SQLAlchemy:
```
SADeprecationWarning: The create_engine.convert_unicode parameter and corresponding dialect-level parameters are deprecated, and will be removed in a future release. Modern DBAPIs support Python Unicode natively and this parameter is unnecessary.
default.DefaultDialect.__init__(self, **kwargs)
```
So the options here should probably be an empty dict, right?: https://github.com/pallets/flask-sqlalchemy/blob/50944e77522d4aa005fc3c833b5a2042280686d3/flask_sqlalchemy/__init__.py#L558
Thanks,
Diogo | closed | 2019-03-26T19:10:48Z | 2020-12-05T20:37:24Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/707 | [] | diogobaeder | 2 |
holoviz/panel | jupyter | 7,771 | Feature: A lighter weight pn.Card | With the advent of LLMs, I see a lot of collapsible divs, e.g.
Claude:
<img width="748" alt="Image" src="https://github.com/user-attachments/assets/4721460a-6c04-4fe4-a7a2-cf8547848009" />
Cursor:
<img width="395" alt="Image" src="https://github.com/user-attachments/assets/965cd583-eedb-49a9-bbcd-0603c3d3ee34" />
These are very thin and do not occupy a lot of space, and I'd like to propose something like `Details`

From:
https://blog.holoviz.org/posts/reactivehtml/index.html
I imagine this thinner collapsible `Details` could also be usefully nested inside `pn.Card`, e.g. nesting the following snippet:
<img width="620" alt="Image" src="https://github.com/user-attachments/assets/cc627d86-23a2-486d-bd03-073ecafc965d" /> | open | 2025-03-11T18:38:03Z | 2025-03-13T11:24:36Z | https://github.com/holoviz/panel/issues/7771 | [] | ahuang11 | 0 |
wkentaro/labelme | computer-vision | 1153 | labelme cannot input Chinese, only English | ### Provide environment information
/anaconda3/bin/python
Python 3.6.7
labelme 5.0.1
### What OS are you using?
Ubuntu 16
### Describe the Bug
labelme cannot input Chinese, only English
### Expected Behavior
_No response_
### To Reproduce
_No response_ | closed | 2022-08-02T12:18:21Z | 2022-09-26T14:54:08Z | https://github.com/wkentaro/labelme/issues/1153 | [
"issue::bug"
] | cqray1990 | 1 |
dmlc/gluon-cv | computer-vision | 1,780 | Vulnerability: code injection | Hi, Your program running a bug bounty program at huntr.com. I have reported a dangerous security vulnerability, please connect to the admin to see details at https://huntr.com/bounties/f7258a30-fc9e-472b-ad23-adba90283e71 (private link) | closed | 2024-04-19T02:37:45Z | 2024-08-15T06:39:57Z | https://github.com/dmlc/gluon-cv/issues/1780 | [
"Stale"
] | h2oa | 1 |
tflearn/tflearn | data-science | 652 | FineTuning VGGNet | Hi,
Thanks for this repo!
I am trying to finetune VGGNet using the code given in the example. I tried the image_preloader on my dataset, but it threw errors while loading a few files (which PIL did not recognise as images).
I deleted those files and am now trying to build an HDF5 dataset. However, I get the error: `TypeError: Can't broadcast (227, 227, 4) -> (1, 227, 227, 3)`
Any ideas on how I can solve this? It would be great if functionality could be added to skip files that are not recognised as image files. | open | 2017-03-06T18:59:19Z | 2017-03-07T22:16:45Z | https://github.com/tflearn/tflearn/issues/652 | [] | chandrasg | 2 |
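The broadcast error `(227, 227, 4) -> (1, 227, 227, 3)` suggests some inputs are RGBA (or palette) images and therefore load with 4 channels instead of 3. A hedged sketch of a Pillow-based pre-pass that normalizes every image to 3-channel RGB before building the HDF5 dataset; the 227x227 size is taken from the error message, and paths would be filled in by the user:

```python
from PIL import Image

def ensure_rgb(img: Image.Image, size=(227, 227)) -> Image.Image:
    """Force an image to 3-channel RGB at a fixed size.

    Drops the alpha channel of RGBA/LA inputs and expands palette or
    grayscale images, so every sample stacks into a (227, 227, 3) array.
    """
    return img.convert("RGB").resize(size)

# quick check on a synthetic RGBA image
rgba = Image.new("RGBA", (300, 300), (255, 0, 0, 128))
rgb = ensure_rgb(rgba)
print(rgb.mode, rgb.size)  # RGB (227, 227)
```

Running each file through `ensure_rgb` (and skipping files that raise `UnidentifiedImageError` on open) before writing the HDF5 file avoids both the broadcast mismatch and the non-image files.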