Latest commit: Update config.json (5744ad4, verified)

Files (size · last commit message):
- 1.52 kB · initial commit
- 24 Bytes · initial commit
- 0 Bytes · Create __init__.py
- 1.16 kB · Update config.json
- 4.08 kB · Upload VINE model - model
- laser_model_v1.pkl · 1.82 GB · Upload laser_model_v1.pkl
  Detected pickle imports (32) — see the inspection and loading sketches after this list:
- "transformers.models.clip.processing_clip.CLIPProcessor",
- "transformers.models.clip.modeling_clip.CLIPMLP",
- "torch._utils._rebuild_parameter",
- "transformers.models.clip.modeling_clip.CLIPSdpaAttention",
- "tokenizers.models.Model",
- "transformers.models.clip.modeling_clip.CLIPModel",
- "torch.FloatStorage",
- "transformers.models.clip.tokenization_clip_fast.CLIPTokenizerFast",
- "tokenizers.AddedToken",
- "transformers.models.clip.modeling_clip.CLIPEncoder",
- "transformers.models.clip.configuration_clip.CLIPConfig",
- "torch.nn.modules.sparse.Embedding",
- "transformers.models.clip.modeling_clip.CLIPVisionEmbeddings",
- "tokenizers.Tokenizer",
- "transformers.activations.QuickGELUActivation",
- "torch.LongStorage",
- "__builtin__.set",
- "torch.nn.modules.normalization.LayerNorm",
- "transformers.models.clip.modeling_clip.CLIPTextTransformer",
- "transformers.models.clip.configuration_clip.CLIPTextConfig",
- "llava_clip_model_v3.PredicateModel",
- "_codecs.encode",
- "torch.nn.modules.conv.Conv2d",
- "collections.OrderedDict",
- "transformers.models.clip.configuration_clip.CLIPVisionConfig",
- "transformers.models.clip.modeling_clip.CLIPVisionTransformer",
- "transformers.models.clip.modeling_clip.CLIPEncoderLayer",
- "torch._utils._rebuild_tensor_v2",
- "torch.nn.modules.container.ModuleList",
- "transformers.models.clip.image_processing_clip.CLIPImageProcessor",
- "torch.nn.modules.linear.Linear",
- "transformers.models.clip.modeling_clip.CLIPTextEmbeddings"
- 1.82 GB · Upload VINE model - model
- 4.93 kB · Upload 3 files
- 31.6 kB · Upload 3 files
- 30.4 kB · Upload 3 files
- 37.4 kB · Upload VINE model - model
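
The import list above is what a static scan of the pickle's opcode stream reports: every GLOBAL or STACK_GLOBAL reference names a module attribute that unpickling will import and invoke. Below is a minimal sketch of such a scan using only the standard-library pickletools module. It assumes laser_model_v1.pkl is a plain pickle stream (checkpoints written by torch.save are zip archives whose inner data.pkl would need extracting first), and the STACK_GLOBAL handling is a rough heuristic rather than a full opcode interpreter.

```python
import pickletools

def list_pickle_imports(path):
    """Statically enumerate the module.name globals a pickle references,
    without executing any of it."""
    imports = set()
    strings = []  # recently pushed string constants, consumed by STACK_GLOBAL
    with open(path, "rb") as f:
        for opcode, arg, _pos in pickletools.genops(f):
            if opcode.name == "GLOBAL":
                # arg is "module qualname", space-separated
                imports.add(arg.replace(" ", ".", 1))
            elif opcode.name == "STACK_GLOBAL" and len(strings) >= 2:
                # module and qualname were pushed as the two previous strings
                imports.add(f"{strings[-2]}.{strings[-1]}")
            if opcode.name in ("SHORT_BINUNICODE", "BINUNICODE", "BINUNICODE8", "UNICODE"):
                strings.append(arg)
    return sorted(imports)

if __name__ == "__main__":
    for ref in list_pickle_imports("laser_model_v1.pkl"):
        print(ref)
```

Run against a file like this one, it should surface entries of the same kind as the 32 above, including the repo-specific llava_clip_model_v3.PredicateModel class.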
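
Because unpickling executes those imports, the checkpoint should only be loaded from a trusted source, and the repo-specific module llava_clip_model_v3 (which defines PredicateModel) must be importable at load time. A hedged loading sketch follows, with an optional conversion to safetensors so future loads avoid pickle entirely; the output filename is illustrative.

```python
import torch
from safetensors.torch import save_file

# Unpickling runs the imports listed above, including the custom class
# llava_clip_model_v3.PredicateModel, so that module must be on sys.path
# (e.g. the repo's llava_clip_model_v3.py). Only load files you trust.
model = torch.load("laser_model_v1.pkl", map_location="cpu", weights_only=False)

# Optional: re-save the weights as safetensors, which stores raw tensors
# and no executable code. Note that save_file() rejects shared or
# non-contiguous tensors, so some state dicts need cleanup first.
save_file(model.state_dict(), "laser_model_v1.safetensors")
```

Rebuilding the model then becomes instantiating PredicateModel from the repo's code and loading the safetensors weights into it, with no pickle execution involved.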