Meta Llama org

@ArthurZ Somehow flex_attention gives me this error:

Some kwargs in processor config are unused and will not have any effect: fake_image_token.
Fetching 55 files: 100%|██████████████████████████████| 55/55 [00:01<00:00, 38.01it/s]
Loading checkpoint shards: 100%|██████████████████████| 55/55 [05:34<00:00,  6.09s/it]
Some parameters are on the meta device because they were offloaded to the cpu.
Traceback (most recent call last):
  File "/home/kaiwu/llama4/old_trans/test_dum.py", line 35, in <module>
    outputs = model.generate(
  File "/home/kaiwu/.conda/envs/final/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/home/kaiwu/.conda/envs/final/lib/python3.10/site-packages/transformers/generation/utils.py", line 2457, in generate
    result = self._sample(
  File "/home/kaiwu/.conda/envs/final/lib/python3.10/site-packages/transformers/generation/utils.py", line 3423, in _sample
    outputs = self(**model_inputs, return_dict=True)
  File "/home/kaiwu/.conda/envs/final/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/kaiwu/.conda/envs/final/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/kaiwu/.conda/envs/final/lib/python3.10/site-packages/accelerate/hooks.py", line 176, in new_forward
    output = module._old_forward(*args, **kwargs)
  File "/home/kaiwu/.conda/envs/final/lib/python3.10/site-packages/transformers/models/llama4/modeling_llama4.py", line 1736, in forward
    image_features = self.get_image_features(
  File "/home/kaiwu/.conda/envs/final/lib/python3.10/site-packages/transformers/models/llama4/modeling_llama4.py", line 1645, in get_image_features
    image_outputs = self.vision_model(pixel_values, output_hidden_states=False, **kwargs)
  File "/home/kaiwu/.conda/envs/final/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/kaiwu/.conda/envs/final/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/kaiwu/.conda/envs/final/lib/python3.10/site-packages/accelerate/hooks.py", line 176, in new_forward
    output = module._old_forward(*args, **kwargs)
  File "/home/kaiwu/.conda/envs/final/lib/python3.10/site-packages/transformers/models/llama4/modeling_llama4.py", line 1550, in forward
    output = self.model(
  File "/home/kaiwu/.conda/envs/final/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/kaiwu/.conda/envs/final/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/kaiwu/.conda/envs/final/lib/python3.10/site-packages/accelerate/hooks.py", line 176, in new_forward
    output = module._old_forward(*args, **kwargs)
  File "/home/kaiwu/.conda/envs/final/lib/python3.10/site-packages/transformers/models/llama4/modeling_llama4.py", line 1384, in forward
    layer_outputs = encoder_layer(
  File "/home/kaiwu/.conda/envs/final/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/kaiwu/.conda/envs/final/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/kaiwu/.conda/envs/final/lib/python3.10/site-packages/accelerate/hooks.py", line 176, in new_forward
    output = module._old_forward(*args, **kwargs)
  File "/home/kaiwu/.conda/envs/final/lib/python3.10/site-packages/transformers/models/llama4/modeling_llama4.py", line 1296, in forward
    hidden_state, attn_weights = self.self_attn(
  File "/home/kaiwu/.conda/envs/final/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/kaiwu/.conda/envs/final/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/kaiwu/.conda/envs/final/lib/python3.10/site-packages/accelerate/hooks.py", line 176, in new_forward
    output = module._old_forward(*args, **kwargs)
  File "/home/kaiwu/.conda/envs/final/lib/python3.10/site-packages/transformers/models/llama4/modeling_llama4.py", line 1241, in forward
    attn_output, attn_weights = attention_interface(
  File "/home/kaiwu/.conda/envs/final/lib/python3.10/site-packages/transformers/integrations/flex_attention.py", line 220, in flex_attention_forward
    attn_output, attention_weights = compile_friendly_flex_attention(
  File "/home/kaiwu/.conda/envs/final/lib/python3.10/site-packages/transformers/integrations/flex_attention.py", line 160, in compile_friendly_flex_attention
    return flex_attention_compiled(
  File "/home/kaiwu/.conda/envs/final/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 574, in _fn
    return fn(*args, **kwargs)
  File "/home/kaiwu/.conda/envs/final/lib/python3.10/site-packages/torch/nn/attention/flex_attention.py", line 1222, in flex_attention
    _validate_embed_dim(query, key, value)
  File "/home/kaiwu/.conda/envs/final/lib/python3.10/site-packages/torch/nn/attention/flex_attention.py", line 1121, in _validate_embed_dim
    raise ValueError(
ValueError: NYI: Currently non power of 2 embedding dimension are not supported. Got E=88 and Ev=88.
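For context, the `ValueError` is raised by `_validate_embed_dim` in `torch/nn/attention/flex_attention.py`, which currently rejects any query/value embedding dimension that is not a power of two ("NYI" = not yet implemented). The Llama 4 vision attention here has head dim 88, which fails that check. A minimal sketch of the constraint (the `is_power_of_two` helper below is illustrative, not torch's actual code):

```python
def is_power_of_two(n: int) -> bool:
    """True if n is a positive power of two (the only embed dims flex_attention accepts)."""
    return n > 0 and (n & (n - 1)) == 0

# Head dim from the traceback (E=88, Ev=88) vs. one flex_attention would accept:
print(is_power_of_two(88))   # → False (triggers the ValueError above)
print(is_power_of_two(128))  # → True
```

So until flex_attention supports arbitrary dims, a model whose head dim is not a power of two (like this vision tower) needs a different attention implementation, e.g. `attn_implementation="sdpa"` or `"eager"`.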

Yes!

Cannot merge
This branch has merge conflicts in the following files:
  • README.md
  • special_tokens_map.json
  • tokenizer_config.json