Instructions:
- Place your name + date next to an issue when you pick it up.
- Each issue must be completed within 3 days of setting yourself as DRI.
- Each issue needs at least 1 unit test + 1 e2e test (see the pytest sketch after the tables).

| # | Title | GitHub Link | Component | DRI | Date Picked Up |
|---|-------|-------------|-----------|-----|----------------|
| 2 | [Bug]: Titan Embeddings fails when using the aws_bedrock_runtime_endpoint parameter | https://github.com/BerriAI/litellm/issues/8219 | Bedrock | | |
| 3 | [Bug]: Tool calling llama meta.llama3-3-70b-instruct-v1:0 not supported | https://github.com/BerriAI/litellm/issues/8094 | Bedrock | | |
| 4 | Bedrock Fallbacks not working | https://github.com/BerriAI/litellm/issues/7637 | Bedrock | | |
| 5 | Bedrock latency-optimized inference support | https://github.com/BerriAI/litellm/issues/7606 | Bedrock | | |
| 6 | [Bug]: Invalid parameters being sent in Amazon Titan requests | https://github.com/BerriAI/litellm/issues/7548 | Bedrock | | |
| 7 | awsume interoperability support | https://github.com/BerriAI/litellm/issues/7526 | Bedrock | | |
| 8 | Usage not reported on Bedrock rerank endpoints | https://github.com/BerriAI/litellm/issues/7258 | Bedrock | | |
| 9 | [Bug]: Bedrock Token Usage Reporting Streaming vs. Non-Streaming | https://github.com/BerriAI/litellm/issues/7112 | Bedrock | | |
| 10 | [Bug]: The wrong number of toolUse blocks when using AWS Bedrock | https://github.com/BerriAI/litellm/issues/7099 | Bedrock | | |
| 11 | [Feature]: Log Amazon Bedrock response headers | https://github.com/BerriAI/litellm/issues/6409 | Bedrock | | |
| 12 | [Bug]: LiteLLM reports 500 instead of 400 when making a call to anthropic.claude-3-haiku for vision with an unsupported type | https://github.com/BerriAI/litellm/issues/6204 | Bedrock | | |
| 13 | [Bug]: Can't send image content blocks in AWS Bedrock via the Anthropic /v1/messages endpoint | https://github.com/BerriAI/litellm/issues/5911 | Bedrock | | |
| 14 | [Bug]: AWS STS credentials not cached | https://github.com/BerriAI/litellm/issues/5142 | Bedrock | | |
| 15 | Support cost mapping for OpenAI-compatible API | https://github.com/BerriAI/litellm/issues/5008 | Logging/Spend Tracking | | |
| 16 | [Bug]: When using LiteLLM Proxy with tool calling, Autogen, and AWS Bedrock Claude, Bedrock errors when content fields are empty | https://github.com/BerriAI/litellm/issues/4820 | Bedrock | | |
| 17 | [Feature]: Support Function Calling for Mistral Bedrock models | https://github.com/BerriAI/litellm/issues/3166 | Bedrock | | |
| 18 | [Bug]: Unable to Pass User ID to Langfuse via LiteLLM Key Metadata | https://github.com/BerriAI/litellm/issues/8355 | Langfuse | | |
| 19 | Logging not working for self-hosted llama | https://github.com/BerriAI/litellm/issues/8049 | Logging/Spend Tracking | | |
| 20 | [Bug]: Langfuse Callbacks not executed when using context window fallback dict | https://github.com/BerriAI/litellm/issues/8014 | Langfuse | | |
| 21 | [Bug]: Langfuse integration: error generated in logging handler - dictionary changed size during iteration | https://github.com/BerriAI/litellm/issues/7675 | Langfuse | | |
| 22 | [Feature]: Integrating user information generated by LiteLLM with Langfuse | https://github.com/BerriAI/litellm/issues/7238 | Langfuse | | |
| 23 | [Feature]: Support Langfuse Multi-Modality and Attachments | https://github.com/BerriAI/litellm/issues/6853 | Langfuse | | |
| 24 | [Bug]: Race condition: wrong trace_id sent to Langfuse when Redis caching is enabled | https://github.com/BerriAI/litellm/issues/6783 | Langfuse | | |
| 25 | [Feature]: Support for Configurable Langfuse Trace and Generation Parameters in config.yaml | https://github.com/BerriAI/litellm/issues/6756 | Langfuse | | |
| 26 | [Bug]: Langfuse exception after using content_policy_fallbacks | https://github.com/BerriAI/litellm/issues/6631 | Langfuse | | |
| 27 | [Bug]: When adding Langfuse logging, the UI is missing LANGFUSE_HOST, so a self-hosted Langfuse setup is not possible in one step | https://github.com/BerriAI/litellm/issues/6450 | Langfuse | | |
| 28 | [Bug]: Langfuse not working when using the litellm client through LiteLLM Proxy | https://github.com/BerriAI/litellm/issues/6423 | Langfuse | | |
| 29 | [Feature]: Return 'trace_id' for failed requests | https://github.com/BerriAI/litellm/issues/6568 | Langfuse | | |
| 30 | [Feature]: langfuse_trace_metadata for rerank endpoints | https://github.com/BerriAI/litellm/issues/6321 | Langfuse | | |
| 31 | [Bug]: Langfuse analytics-python queue is full | https://github.com/BerriAI/litellm/issues/5934 | Langfuse | | |
| 32 | [Feature]: Support N Choices in Langfuse | https://github.com/BerriAI/litellm/issues/4964 | Langfuse | | |
| 33 | [Feature]: Streaming prompt errors should log partial content | https://github.com/BerriAI/litellm/issues/4605 | Langfuse | | |
| 34 | [Bug]: cache_hit for embedding result is still considered to cost more than $0 | https://github.com/BerriAI/litellm/issues/3762 | Langfuse | | |
| 35 | [Bug]: logprobs missing from Langfuse | https://github.com/BerriAI/litellm/issues/3254 | Langfuse | | |
| 36 | [Bug]: API keys shown in debug mode | https://github.com/BerriAI/litellm/issues/7603 | Security | | |
| 37 | [Feature]: Add various security headers | https://github.com/BerriAI/litellm/issues/3677 | Security | | |
| 38 | [Bug]: Zero usage returned for streaming from completions API of LiteLLM proxy server | https://github.com/BerriAI/litellm/issues/8349 | OpenAI | | |
| 39 | Regression with response structure from Anthropic models | https://github.com/BerriAI/litellm/issues/8291 | Structured Output | | |
| 40 | [Bug]: Loop when background_health_checks is set to true | https://github.com/BerriAI/litellm/issues/8248 | Service Availability | | |
| 41 | [Bug]: Invalid metadata when calling OpenAI models | https://github.com/BerriAI/litellm/issues/8209 | OpenAI | | |
| 42 | [Bug]: litellm.drop_params=True does not remove temperature for o3-mini | https://github.com/BerriAI/litellm/issues/8192 | OpenAI | | |
| 43 | [Bug]: Inconsistent stream output between OpenAI and LiteLLM clients during tool calling | https://github.com/BerriAI/litellm/issues/8012 | OpenAI | | |
| 44 | [Bug]: logprobs not included in suggestion response | https://github.com/BerriAI/litellm/issues/7974 | OpenAI | | |
| 45 | [Feature]: Support for Anthropic Citations API | https://github.com/BerriAI/litellm/issues/7970 | Anthropic | | |
| 46 | text_completion output issues | https://github.com/BerriAI/litellm/issues/7947 | OpenAI | | |
| 47 | [Bug]: Some small inconsistencies found in LiteLLM_SpendLogs -> api_base | https://github.com/BerriAI/litellm/issues/7317 | Logging/Spend Tracking | | |
| 48 | [Bug]: Infinite loop when checking get_openai_supported_params | https://github.com/BerriAI/litellm/issues/7185 | Service Availability | | |
| 49 | [Bug]: WebSocket issues with OpenAI Realtime API in the browser | https://github.com/BerriAI/litellm/issues/6825 | OpenAI | | |
| 50 | [Feature]: Support Edition methods for Image Generation Models | https://github.com/BerriAI/litellm/issues/6772 | OpenAI | | |
| 51 | [Bug]: OpenAI v1/audio/transcriptions: "Invalid proxy server token passed. valid_token=None", "type": "auth_error", "param": "None", "code": "401" | https://github.com/BerriAI/litellm/issues/6638 | OpenAI | | |
| 52 | [Bug]: OpenAI Embedding does not support modality parameter in `extra_body` | https://github.com/BerriAI/litellm/issues/6525 | OpenAI | | |
| 53 | | https://github.com/BerriAI/litellm/issues/4417 | Security | | |

| Component | Issues Completed |
|-----------|------------------|
| Bedrock | |
| Langfuse | |
| Logging/Spend Tracking | |
| Structured Output | |
| Service Availability | |
| OpenAI | |
| Anthropic | |
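For the "1 unit test + 1 e2e test" requirement above, here is a minimal sketch of what a fix for a tracked issue might ship with. It assumes pytest, litellm's `mock_response` parameter for the unit test, and a locally running LiteLLM Proxy (default `http://localhost:4000`) for the e2e test; the file name, the `sk-1234` key, and the asserted behavior are illustrative, not taken from any specific issue.

```python
# test_issue_fix.py -- illustrative pytest skeleton (names are hypothetical)
import litellm
import openai
import pytest


def test_unit_usage_is_populated():
    """Unit test: mock_response lets litellm return a canned reply without a provider call."""
    resp = litellm.completion(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "hi"}],
        mock_response="Hello!",  # returned verbatim instead of calling the API
    )
    assert resp.choices[0].message.content == "Hello!"
    assert resp.usage is not None  # e.g. the kind of regression being guarded against


@pytest.mark.e2e  # custom mark; requires a proxy started with `litellm --config config.yaml`
def test_e2e_proxy_completion():
    """E2E test: exercises the full request path through the running proxy."""
    client = openai.OpenAI(
        base_url="http://localhost:4000",  # default proxy port
        api_key="sk-1234",                 # hypothetical master key
    )
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "hi"}],
    )
    assert resp.choices[0].message.content
```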
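For the Bedrock rows, a repro is usually a few lines. This sketch targets issue #8219 (row 2), assuming only the `aws_bedrock_runtime_endpoint` parameter named in the issue title; the endpoint URL and region are illustrative placeholders, not values from the issue.

```python
import litellm

# Repro sketch for issue #8219: Titan embeddings with a custom Bedrock runtime endpoint.
# The VPC endpoint URL and region below are placeholders.
response = litellm.embedding(
    model="bedrock/amazon.titan-embed-text-v1",
    input=["hello world"],
    aws_region_name="us-east-1",
    aws_bedrock_runtime_endpoint="https://vpce-example.bedrock-runtime.us-east-1.vpce.amazonaws.com",
)
print(response.usage)  # usage reporting on Bedrock endpoints is itself tracked in rows 8 and 9
```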
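Many of the Langfuse rows come down to how the callback is wired, so a shared starting point helps when triaging. This sketch shows the standard litellm-to-Langfuse hookup; the key values are placeholders, `LANGFUSE_HOST` (the variable row 27 says the UI omits) is only needed for self-hosted deployments, and the `trace_user_id` metadata key relates to the user-ID plumbing in rows 18 and 22.

```python
import os

import litellm

# Placeholder credentials -- substitute real Langfuse project keys.
os.environ["LANGFUSE_PUBLIC_KEY"] = "pk-lf-..."
os.environ["LANGFUSE_SECRET_KEY"] = "sk-lf-..."
os.environ["LANGFUSE_HOST"] = "https://langfuse.internal.example.com"  # self-hosted only

# Route success and failure events to Langfuse.
litellm.success_callback = ["langfuse"]
litellm.failure_callback = ["langfuse"]

response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "hi"}],
    metadata={"trace_user_id": "user-123"},  # surfaces as the user on the Langfuse trace
)
```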