chore(deps): update container image docker.io/localai/localai to v2.19.3 by renovate (#24494)

This PR contains the following updates:

| Package | Update | Change |
|---|---|---|
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | patch | `v2.19.2-aio-cpu` -> `v2.19.3-aio-cpu` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | patch | `v2.19.2-aio-gpu-nvidia-cuda-11` -> `v2.19.3-aio-gpu-nvidia-cuda-11` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | patch | `v2.19.2-aio-gpu-nvidia-cuda-12` -> `v2.19.3-aio-gpu-nvidia-cuda-12` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | patch | `v2.19.2-cublas-cuda11-ffmpeg-core` -> `v2.19.3-cublas-cuda11-ffmpeg-core` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | patch | `v2.19.2-cublas-cuda11-core` -> `v2.19.3-cublas-cuda11-core` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | patch | `v2.19.2-cublas-cuda12-ffmpeg-core` -> `v2.19.3-cublas-cuda12-ffmpeg-core` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | patch | `v2.19.2-cublas-cuda12-core` -> `v2.19.3-cublas-cuda12-core` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | patch | `v2.19.2-ffmpeg-core` -> `v2.19.3-ffmpeg-core` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | patch | `v2.19.2` -> `v2.19.3` |
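
Each row in this table maps onto one image block in the chart's `values.yaml` (see the full diff further down). As a minimal sketch, the plain `v2.19.2` -> `v2.19.3` row corresponds to the default `image` block, with the digest being the one introduced by this PR:

```yaml
# Default image block in charts/stable/local-ai/values.yaml after this bump;
# the tag pins the image by both version and digest.
image:
  repository: docker.io/localai/localai
  pullPolicy: IfNotPresent
  tag: v2.19.3@sha256:0cb22d95e97831288d3b5401bf59ec041c476f1dfca2157b4e13bb22de4156b0
```

Pinning by tag plus digest means deployments pull exactly the manifest published for v2.19.3, even if the tag were later re-pushed.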

---

> [!WARNING]
> Some dependencies could not be looked up. Check the Dependency Dashboard for more information.

---

### Release Notes

<details>
<summary>mudler/LocalAI (docker.io/localai/localai)</summary>

### [`v2.19.3`](https://togithub.com/mudler/LocalAI/releases/tag/v2.19.3)

[Compare
Source](https://togithub.com/mudler/LocalAI/compare/v2.19.2...v2.19.3)

<!-- Release notes generated using configuration in .github/release.yml
at master -->

##### What's Changed

##### Bug fixes 🐛

- fix(gallery): do not attempt to delete duplicate files by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/3031](https://togithub.com/mudler/LocalAI/pull/3031)
- fix(gallery): do clear out errors once displayed by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/3033](https://togithub.com/mudler/LocalAI/pull/3033)

##### Exciting New Features 🎉

- feat(grammar): add llama3.1 schema by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/3015](https://togithub.com/mudler/LocalAI/pull/3015)

##### 🧠 Models

- models(gallery): add llama3.1-claude by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/3005](https://togithub.com/mudler/LocalAI/pull/3005)
- models(gallery): add darkidol llama3.1 by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/3008](https://togithub.com/mudler/LocalAI/pull/3008)
- models(gallery): add gemmoy by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/3009](https://togithub.com/mudler/LocalAI/pull/3009)
- chore: add function calling template for llama 3.1 models by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/3010](https://togithub.com/mudler/LocalAI/pull/3010)
- chore: models(gallery): ⬆️ update checksum by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/3013](https://togithub.com/mudler/LocalAI/pull/3013)
- models(gallery): add mistral-nemo by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/3019](https://togithub.com/mudler/LocalAI/pull/3019)
- models(gallery): add llama3.1-8b-fireplace2 by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/3018](https://togithub.com/mudler/LocalAI/pull/3018)
- models(gallery): add lumimaid-v0.2-12b by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/3020](https://togithub.com/mudler/LocalAI/pull/3020)
- models(gallery): add darkidol-llama-3.1-8b-instruct-1.1-uncensored-iq…
by [@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/3021](https://togithub.com/mudler/LocalAI/pull/3021)
- models(gallery): add meta-llama-3.1-8b-instruct-abliterated by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/3022](https://togithub.com/mudler/LocalAI/pull/3022)
- models(gallery): add llama-3.1-70b-japanese-instruct-2407 by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/3023](https://togithub.com/mudler/LocalAI/pull/3023)
- models(gallery): add llama-3.1-8b-instruct-fei-v1-uncensored by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/3024](https://togithub.com/mudler/LocalAI/pull/3024)
- models(gallery): add openbuddy-llama3.1-8b-v22.1-131k by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/3025](https://togithub.com/mudler/LocalAI/pull/3025)
- models(gallery): add lumimaid-8b by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/3026](https://togithub.com/mudler/LocalAI/pull/3026)
- models(gallery): add llama3 with enforced functioncall with grammars
by [@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/3027](https://togithub.com/mudler/LocalAI/pull/3027)
- chore(model-gallery): ⬆️ update checksum by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/3036](https://togithub.com/mudler/LocalAI/pull/3036)

##### 👒 Dependencies

- chore: ⬆️ Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/3003](https://togithub.com/mudler/LocalAI/pull/3003)
- chore: ⬆️ Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/3012](https://togithub.com/mudler/LocalAI/pull/3012)
- chore: ⬆️ Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/3016](https://togithub.com/mudler/LocalAI/pull/3016)
- chore: ⬆️ Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/3030](https://togithub.com/mudler/LocalAI/pull/3030)
- chore: ⬆️ Update ggerganov/whisper.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/3029](https://togithub.com/mudler/LocalAI/pull/3029)
- chore: ⬆️ Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/3034](https://togithub.com/mudler/LocalAI/pull/3034)

##### Other Changes

- docs: ⬆️ update docs version mudler/LocalAI by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/3002](https://togithub.com/mudler/LocalAI/pull/3002)
- refactor: break down json grammar parser in different files by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/3004](https://togithub.com/mudler/LocalAI/pull/3004)
- fix: PR title tag for checksum checker script workflow by
[@&#8203;dave-gray101](https://togithub.com/dave-gray101) in
[https://github.com/mudler/LocalAI/pull/3014](https://togithub.com/mudler/LocalAI/pull/3014)

**Full Changelog**:
mudler/LocalAI@v2.19.2...v2.19.3

</details>

---

### Configuration

📅 **Schedule**: Branch creation - At any time (no schedule defined),
Automerge - At any time (no schedule defined).

🚦 **Automerge**: Enabled.

♻ **Rebasing**: Whenever the PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about these
updates again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

This PR has been generated by [Renovate
Bot](https://togithub.com/renovatebot/renovate).

truecharts-admin authored Jul 28, 2024
1 parent d3193b2 commit e9f25aa
Showing 2 changed files with 11 additions and 11 deletions.
charts/stable/local-ai/Chart.yaml (4 changes: 2 additions & 2 deletions)

@@ -6,7 +6,7 @@ annotations:
   truecharts.org/min_helm_version: "3.11"
   truecharts.org/train: stable
 apiVersion: v2
-appVersion: 2.19.2
+appVersion: 2.19.3
 dependencies:
   - name: common
     version: 24.1.5
@@ -33,4 +33,4 @@ sources:
   - https://github.com/truecharts/charts/tree/master/charts/stable/local-ai
   - https://hub.docker.com/r/localai/localai
 type: application
-version: 11.11.4
+version: 11.11.5
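
For reference, the resulting Chart.yaml metadata after this commit looks like the following sketch (fields taken from the diff above; `appVersion` tracks the upstream LocalAI release packaged by the chart, while `version` is the chart's own semver, patch-bumped for the image update):

```yaml
# Relevant fields of charts/stable/local-ai/Chart.yaml after this commit
apiVersion: v2
appVersion: 2.19.3   # upstream LocalAI version shipped by the chart
version: 11.11.5     # chart version, patch-bumped for the image update
```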
charts/stable/local-ai/values.yaml (18 changes: 9 additions & 9 deletions)

@@ -1,39 +1,39 @@
 image:
   repository: docker.io/localai/localai
   pullPolicy: IfNotPresent
-  tag: v2.19.2@sha256:049db2ad15fd82df1b3f2e932f607d07e1cd9449d6821dbeb33601d908418403
+  tag: v2.19.3@sha256:0cb22d95e97831288d3b5401bf59ec041c476f1dfca2157b4e13bb22de4156b0
 ffmpegImage:
   repository: docker.io/localai/localai
   pullPolicy: IfNotPresent
-  tag: v2.19.2-ffmpeg-core@sha256:cedf339779f3dec9f58d6033bc76c4ddd2581adcb942eec8e3e20ad85fff70b2
+  tag: v2.19.3-ffmpeg-core@sha256:f7005e8a3371ad4cba3a2b40595ed1d405f6a0b22cf96135030923dc272d67db
 cublasCuda12Image:
   repository: docker.io/localai/localai
   pullPolicy: IfNotPresent
-  tag: v2.19.2-cublas-cuda12-core@sha256:368ab8bdf7a48f9d6b77720a823aa57ee7c971d64425ab98fc902c859b567a91
+  tag: v2.19.3-cublas-cuda12-core@sha256:fd29bc6c4c43a326b9128104594400897c50f3f2ad95e7d16180fe1771c58d83
 cublasCuda12FfmpegImage:
   repository: docker.io/localai/localai
   pullPolicy: IfNotPresent
-  tag: v2.19.2-cublas-cuda12-ffmpeg-core@sha256:ffcef33926bb24f31aefe70bba84932968c509bd11a97ca21c1d178a793b8539
+  tag: v2.19.3-cublas-cuda12-ffmpeg-core@sha256:c4feed027ae449a6dd069a718e6b82cdb8b665935d228cf5e2c5c7973c1a34c6
 cublasCuda11Image:
   repository: docker.io/localai/localai
   pullPolicy: IfNotPresent
-  tag: v2.19.2-cublas-cuda11-core@sha256:4757d5eb650254ee0cd4bbb8edaffe19e522a0d2eb2589cf87df70c44b88fe2d
+  tag: v2.19.3-cublas-cuda11-core@sha256:8297c5f353f19bbc58dde3d1adf2d5d829c779e3bf74a482b729a4833fc9219b
 cublasCuda11FfmpegImage:
   repository: docker.io/localai/localai
   pullPolicy: IfNotPresent
-  tag: v2.19.2-cublas-cuda11-ffmpeg-core@sha256:9a571aa7aab8c182aa9a8f8583dbed78a29f4ff81ebc90ab7eff57a330c5b467
+  tag: v2.19.3-cublas-cuda11-ffmpeg-core@sha256:4bd482bd5275d8fe31c948f831f4fcd6063513a6ebbb5dd7d35de415e3874fdf
 allInOneCuda12Image:
   repository: docker.io/localai/localai
   pullPolicy: IfNotPresent
-  tag: v2.19.2-aio-gpu-nvidia-cuda-12@sha256:22734d5b39f10fa5463c67b69d36e75d81acf39b71cd06a5f29342ddc66f8c13
+  tag: v2.19.3-aio-gpu-nvidia-cuda-12@sha256:87b6ce3d254084aedf6c015a1cc668e18f607eac762e5a582f6960b65c407e39
 allInOneCuda11Image:
   repository: docker.io/localai/localai
   pullPolicy: IfNotPresent
-  tag: v2.19.2-aio-gpu-nvidia-cuda-11@sha256:f27dcc1040654028b8314eed6c548b84b8d1e55bc2a2ff17923a15cc8e15b237
+  tag: v2.19.3-aio-gpu-nvidia-cuda-11@sha256:c0f529e60b371f2442e6cb965085773a5e838a5a86ef16d439cef68f86537b60
 allInOneCpuImage:
   repository: docker.io/localai/localai
   pullPolicy: IfNotPresent
-  tag: v2.19.2-aio-cpu@sha256:e272ca3b42eaa902d1a4fd521df5a01b1bbb62dd072a66ad6a5eab01e32c0b8c
+  tag: v2.19.3-aio-cpu@sha256:0d6d6cb8366f92276d1eab8fe5b7a31346e6c6b2adfa7407aed5530d54d42898
 securityContext:
   container:
     runAsNonRoot: false
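
If you deploy this chart with your own values file, the same keys can be overridden there. A minimal sketch, assuming a hypothetical `my-values.yaml` and the CUDA 12 all-in-one variant (key names and digest mirror the chart's `values.yaml` above):

```yaml
# my-values.yaml (hypothetical override file); keys mirror charts/stable/local-ai/values.yaml
allInOneCuda12Image:
  repository: docker.io/localai/localai
  pullPolicy: IfNotPresent
  tag: v2.19.3-aio-gpu-nvidia-cuda-12@sha256:87b6ce3d254084aedf6c015a1cc668e18f607eac762e5a582f6960b65c407e39
```

An override like this is only needed to pin or swap a variant independently of the chart release; otherwise the defaults above apply automatically.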
