Compare commits


16 Commits

Author SHA1 Message Date
copilot-swe-agent[bot] 4a30a697f1 check-requirements: update existing comment in place instead of delete+recreate
Agent-Logs-Url: https://github.com/home-assistant/core/sessions/0c57df2f-81a3-4ab1-9343-465523db657f

Co-authored-by: edenhaus <26537646+edenhaus@users.noreply.github.com>
2026-04-29 06:24:07 +00:00
copilot-swe-agent[bot] 0f738ce5b0 check-requirements: add workflow_dispatch trigger, deduplicate comment on each run
Agent-Logs-Url: https://github.com/home-assistant/core/sessions/7ca80b22-68f1-4a3b-ad94-2d4c054ac0f0

Co-authored-by: edenhaus <26537646+edenhaus@users.noreply.github.com>
2026-04-24 14:57:03 +00:00
Robert Resch 93a8f4d94e Apply suggestions from code review
Co-authored-by: Robert Resch <robert@resch.dev>
2026-04-24 01:06:21 +02:00
copilot-swe-agent[bot] 3303339797 check-requirements: move overall summary line to top of comment (before table)
Agent-Logs-Url: https://github.com/home-assistant/core/sessions/9228f3c1-ac84-42f9-aed1-c8c6156cef03

Co-authored-by: edenhaus <26537646+edenhaus@users.noreply.github.com>
2026-04-23 23:03:28 +00:00
copilot-swe-agent[bot] eacfd0ce50 check-requirements: use icon-only table, add collapsible per-package detail sections
Agent-Logs-Url: https://github.com/home-assistant/core/sessions/5314c056-b511-48aa-bace-bb9c43fac637

Co-authored-by: edenhaus <26537646+edenhaus@users.noreply.github.com>
2026-04-23 22:59:18 +00:00
Robert Resch 609f935430 Recompile 2026-04-23 22:19:29 +00:00
copilot-swe-agent[bot] 30151a484b Exclude auto-generated lock file from prettier check
Agent-Logs-Url: https://github.com/home-assistant/core/sessions/0ab3b575-2a57-48e7-a15f-cd55aa410f41

Co-authored-by: edenhaus <26537646+edenhaus@users.noreply.github.com>
2026-04-23 22:15:44 +00:00
copilot-swe-agent[bot] df1cf178e8 Exclude auto-generated lock file from yamllint and zizmor checks
Agent-Logs-Url: https://github.com/home-assistant/core/sessions/f728bfbc-371b-44a3-bce9-3ecdc9cce4fb

Co-authored-by: edenhaus <26537646+edenhaus@users.noreply.github.com>
2026-04-23 09:23:26 +00:00
copilot-swe-agent[bot] fe2214e071 check-requirements: tighten step 4a, add public-repo check, always comment
Agent-Logs-Url: https://github.com/home-assistant/core/sessions/dc04d9a1-1c24-4abd-8379-58a473ba3f25

Co-authored-by: edenhaus <26537646+edenhaus@users.noreply.github.com>
2026-04-22 15:17:44 +00:00
copilot-swe-agent[bot] 36488d5d26 Revert "Add PyPI wheel availability info output to hassfest requirements check"
This reverts commit 4a895255d6.

Co-authored-by: edenhaus <26537646+edenhaus@users.noreply.github.com>
2026-04-22 14:14:35 +00:00
copilot-swe-agent[bot] 4a895255d6 Add PyPI wheel availability info output to hassfest requirements check
Agent-Logs-Url: https://github.com/home-assistant/core/sessions/846855d5-b238-485c-ad9c-9def58ab5de5

Co-authored-by: edenhaus <26537646+edenhaus@users.noreply.github.com>
2026-04-22 13:51:58 +00:00
copilot-swe-agent[bot] b3075ecc9b Restore forks trigger; generalize release pipeline check to GitLab and other hosts
Agent-Logs-Url: https://github.com/home-assistant/core/sessions/774c8674-5a55-4b8c-a48c-44ebfe4ca73d

Co-authored-by: edenhaus <26537646+edenhaus@users.noreply.github.com>
2026-04-22 13:28:56 +00:00
copilot-swe-agent[bot] fdfe4365a1 Restrict workflow to non-fork PRs; add PyPI CI-upload and release pipeline sanity checks
Agent-Logs-Url: https://github.com/home-assistant/core/sessions/a9e9c91a-f16e-4237-8693-f301733062a3

Co-authored-by: edenhaus <26537646+edenhaus@users.noreply.github.com>
2026-04-22 13:17:15 +00:00
copilot-swe-agent[bot] 5043c8b87d Expand requirements check: test deps, repo-specific link validation, diff consistency
Agent-Logs-Url: https://github.com/home-assistant/core/sessions/552c9b6d-5829-411f-b3cd-a86c7ffb7ac7

Co-authored-by: edenhaus <26537646+edenhaus@users.noreply.github.com>
2026-04-22 12:56:08 +00:00
copilot-swe-agent[bot] 7864a661e1 Fix duplicate .gitattributes entry for lock files
Agent-Logs-Url: https://github.com/home-assistant/core/sessions/175e4dde-73f0-4164-bf5f-7a839518bf1f

Co-authored-by: edenhaus <26537646+edenhaus@users.noreply.github.com>
2026-04-22 12:39:13 +00:00
copilot-swe-agent[bot] aa40340068 Add agentic workflow to check requirements licenses and PR description links
Agent-Logs-Url: https://github.com/home-assistant/core/sessions/175e4dde-73f0-4164-bf5f-7a839518bf1f

Co-authored-by: edenhaus <26537646+edenhaus@users.noreply.github.com>
2026-04-22 12:38:07 +00:00
1301 changed files with 15846 additions and 53712 deletions
+4 -5
@@ -27,13 +27,12 @@ description: Reviews GitHub pull requests and provides feedback comments. This i
- No need to highlight things that are already good.
## Output format:
- List specific comments for each file/line that needs attention.
- List specific comments for each file/line that needs attention
- In the end, summarize with an overall assessment (approve, request changes, or comment) and bullet point list of changes suggested, if any.
- Example output:
```
Overall assessment: request changes.
- [CRITICAL] sensor.py:143 - Memory leak
- [PROBLEM] data_processing.py:87 - Inefficient algorithm
- [SUGGESTION] test_init.py:45 - Improve x variable name
- [CRITICAL] Memory leak in homeassistant/components/sensor/my_sensor.py:143
- [PROBLEM] Inefficient algorithm in homeassistant/helpers/data_processing.py:87
- [SUGGESTION] Improve variable naming in homeassistant/helpers/config_validation.py:45
```
- Make sure to include the file and line number when possible in the bullet points.
@@ -1,5 +1,5 @@
---
name: ha-integration-knowledge
name: Home Assistant Integration knowledge
description: Everything you need to know to build, test and review Home Assistant Integrations. If you're looking at an integration, you must use this as your primary reference.
---
@@ -14,7 +14,6 @@ description: Everything you need to know to build, test and review Home Assistan
- Do NOT allow users to set config entry names in config flows. Names are automatically generated or can be customized later in UI. Exception: helper integrations may allow custom names.
- For entity actions and entity services, avoid requesting redundant defensive checks for fields already enforced by Home Assistant validation schemas and entity filters; only request extra guards when values bypass validation or are transformed unsafely.
- When validation guarantees a key is present, prefer direct dictionary indexing (`data["key"]`) over `.get("key")` so invalid assumptions fail fast.
- Integrations should be thin wrappers. Protocol parsing, device state machines, or other domain logic belong in a separate PyPI library, not in the integration itself. If unsure, ask before inlining.
The following platforms have extra guidelines:
- **Diagnostics**: [`platform-diagnostics.md`](platform-diagnostics.md) for diagnostic data collection
-1
@@ -36,7 +36,6 @@ base_platforms: &base_platforms
- homeassistant/components/image_processing/**
- homeassistant/components/infrared/**
- homeassistant/components/lawn_mower/**
- homeassistant/components/radio_frequency/**
- homeassistant/components/light/**
- homeassistant/components/lock/**
- homeassistant/components/media_player/**
+1
@@ -23,3 +23,4 @@ requirements_all.txt linguist-generated=true
requirements_test_all.txt linguist-generated=true
requirements_test_pre_commit.txt linguist-generated=true
script/hassfest/docker/Dockerfile linguist-generated=true
.github/workflows/*.lock.yml linguist-generated=true merge=ours
+7 -2
@@ -5,7 +5,7 @@
# Copilot code review instructions
- Start review comments with a short, one-sentence summary of the suggested fix.
- Do not comment on code style, formatting or linting issues.
- Do not add comments about code style, formatting or linting issues.
# GitHub Copilot & Claude Code Instructions
@@ -21,7 +21,7 @@ This repository contains the core of Home Assistant, a Python 3 based home autom
## Python Syntax Notes
- Python 3.14 explicitly allows `except TypeA, TypeB:` without parentheses. Never flag this as an issue since Home Assistant officially supports Python 3.14.
- Python 3.14 explicitly allows `except TypeA, TypeB:` without parentheses.
## Testing
@@ -34,3 +34,8 @@ Integrations with Platinum or Gold level in the Integration Quality Scale reflec
When reviewing entity actions, do not suggest extra defensive checks for input fields that are already validated by Home Assistant's service/action schemas and entity selection filters. Suggest additional guards only when data bypasses those validators or is transformed into a less-safe form.
When validation guarantees a dict key exists, prefer direct key access (`data["key"]`) instead of `.get("key")` so contract violations are surfaced instead of silently masked.
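A minimal illustration of this guideline (the `data` dict and the key are hypothetical, not taken from the repository):
```
# Hypothetical snippet: `data` has already passed schema validation,
# so "brightness" is guaranteed to be present.
data = {"brightness": 128}

# Preferred: direct indexing fails fast (KeyError) if the validation
# contract is ever violated upstream.
brightness = data["brightness"]

# Discouraged here: .get() silently masks a broken contract by
# returning None instead of raising.
maybe_brightness = data.get("brightness")
```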
# Skills
- Home Assistant Integration knowledge: .claude/skills/integrations/SKILL.md
@@ -1,46 +0,0 @@
---
applyTo: "homeassistant/components/**, tests/components/**"
excludeAgent: "cloud-agent"
---
<!-- Automatically generated by gen_copilot_instructions.py, do not edit -->
## File Locations
- **Integration code**: `./homeassistant/components/<integration_domain>/`
- **Integration tests**: `./tests/components/<integration_domain>/`
## General guidelines
- When looking for examples, prefer integrations with the platinum or gold quality scale level first.
- Polling intervals are NOT user-configurable. Never add scan_interval, update_interval, or polling frequency options to config flows or config entries.
- Do NOT allow users to set config entry names in config flows. Names are automatically generated or can be customized later in UI. Exception: helper integrations may allow custom names.
- For entity actions and entity services, avoid requesting redundant defensive checks for fields already enforced by Home Assistant validation schemas and entity filters; only request extra guards when values bypass validation or are transformed unsafely.
- When validation guarantees a key is present, prefer direct dictionary indexing (`data["key"]`) over `.get("key")` so invalid assumptions fail fast.
- Integrations should be thin wrappers. Protocol parsing, device state machines, or other domain logic belong in a separate PyPI library, not in the integration itself. If unsure, ask before inlining.
The following platforms have extra guidelines:
- **Diagnostics**: [`platform-diagnostics.md`](platform-diagnostics.md) for diagnostic data collection
- **Repairs**: [`platform-repairs.md`](platform-repairs.md) for user-actionable repair issues
## Integration Quality Scale
- When validating the quality scale rules, check them at https://developers.home-assistant.io/docs/core/integration-quality-scale/rules
- When implementing or reviewing an integration, always consider the quality scale rules, since they promote best practices.
Template scale file: `./script/scaffold/templates/integration/integration/quality_scale.yaml`
### How Rules Apply
1. **Check `manifest.json`**: Look for `"quality_scale"` key to determine integration level
2. **Bronze Rules**: Always required for any integration with quality scale
3. **Higher Tier Rules**: Only apply if integration targets that tier or higher
4. **Rule Status**: Check `quality_scale.yaml` in integration folder for:
- `done`: Rule implemented
- `exempt`: Rule doesn't apply (with reason in comment)
- `todo`: Rule needs implementation
## Testing Requirements
- Tests should avoid interacting or mocking internal integration details. For more info, see https://developers.home-assistant.io/docs/development_testing/#writing-tests-for-integrations
+2 -3
@@ -6,7 +6,7 @@
"pep621",
"pip_requirements",
"pre-commit",
"custom.regex",
"regex",
"homeassistant-manifest"
],
@@ -27,9 +27,8 @@
]
},
"customManagers": [
"regexManagers": [
{
"customType": "regex",
"description": "Update ruff required-version in pyproject.toml",
"managerFilePatterns": ["/^pyproject\\.toml$/"],
"matchStrings": ["required-version = \">=(?<currentValue>[\\d.]+)\""],
File diff suppressed because it is too large
+402
@@ -0,0 +1,402 @@
---
on:
pull_request:
types: [opened, synchronize, reopened]
paths:
- "requirements*.txt"
- "homeassistant/package_constraints.txt"
- "pyproject.toml"
forks: ["*"]
workflow_dispatch:
inputs:
pull_request_number:
description: "Pull request number to (re-)check"
required: true
type: number
permissions:
contents: read
pull-requests: read
issues: read
network:
allowed:
- python
tools:
web-fetch: {}
github:
toolsets: [default]
safe-outputs:
add-comment:
max: 1
description: >
Checks changed Python package requirements on PRs targeting the core repo
(including fork PRs): verifies licenses match PyPI metadata, source
repositories are publicly accessible, PyPI releases were uploaded via
automated CI (Trusted Publisher attestation), the package's release pipeline
uses OIDC or equivalent automated credentials (not static tokens), and the PR
description contains the required links.
---
# Requirements License and Availability Check
You are a code review assistant for the Home Assistant project. Your job is to
review changes to Python package requirements and verify they meet the project's
standards.
## Context
- Home Assistant uses `requirements_all.txt` (all integration packages),
`requirements.txt` (core packages), `requirements_test.txt` (test
dependencies), and `requirements_test_all.txt` (all test dependencies) to
declare Python dependencies.
- Each integration lists its packages in `homeassistant/components/<name>/manifest.json`
under the `requirements` field.
- Allowed licenses are maintained in `script/licenses.py` under
`OSI_APPROVED_LICENSES_SPDX` (SPDX identifiers) and `OSI_APPROVED_LICENSES`
(classifier strings).
## Step 1 — Identify Changed Packages
Use the GitHub tool to fetch the PR diff. Look for lines that were added (`+`)
or removed (`-`) in **all** of these files:
- `requirements.txt`
- `requirements_all.txt`
- `requirements_test.txt`
- `requirements_test_all.txt`
- `homeassistant/package_constraints.txt`
- `pyproject.toml`
For each changed line that contains a package pin (e.g. `SomePackage==1.2.3`),
classify it as:
- **New package**: the package name appears only in `+` lines, with no
corresponding `-` line for the same package name.
- **Version bump**: the same package name appears in both `+` lines (new
version) and `-` lines (old version), with different version numbers.
Record the **old version** and **new version** for every version bump — you
will need these values in Step 4.
Ignore comment lines (starting with `#`), lines that start with `-r ` (file
includes), and lines that don't contain `==`.
## Step 2 — Check License via PyPI
For each new or bumped package:
1. Fetch `https://pypi.org/pypi/{package_name}/json` (use the exact
package name as it appears on PyPI).
2. From the JSON response, extract:
- `info.license` — free-text license field
- `info.license_expression` — SPDX expression (if present)
- `info.classifiers` — filter for entries starting with `"License ::"`.
3. Determine if the license is in the approved list from `script/licenses.py`:
- SPDX identifiers: compare against `OSI_APPROVED_LICENSES_SPDX`
- Classifier strings: compare against `OSI_APPROVED_LICENSES`
4. Flag a package as ❌ if the license is unknown, missing, or not in the
approved list. Flag as ⚠️ if the license information is ambiguous or cannot
be definitively determined.
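A hedged sketch of the license lookup, assuming the approved-license sets from `script/licenses.py` are passed in and using `requests` in place of the workflow's web-fetch tool:
```
import requests

def license_status(package: str, spdx_allowed: set[str], classifier_allowed: set[str]) -> str:
    """Return "✅", "⚠️" or "❌" based on a package's PyPI license metadata."""
    info = requests.get(f"https://pypi.org/pypi/{package}/json", timeout=30).json()["info"]
    # license_expression may be a compound SPDX expression; exact membership
    # checking is a simplification of what the reviewer actually has to do.
    spdx = info.get("license_expression")
    classifiers = [c for c in info.get("classifiers", []) if c.startswith("License ::")]
    if (spdx and spdx in spdx_allowed) or any(c in classifier_allowed for c in classifiers):
        return "✅"
    if not spdx and not classifiers and not info.get("license"):
        return "❌"  # no license information at all
    return "⚠️"  # license present but not clearly approved; needs a human call
```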
## Step 2b — Verify PyPI Release Was Uploaded by CI
For each new or bumped package, verify that the release on PyPI was published
automatically by a CI pipeline (via OIDC Trusted Publisher), not uploaded
manually.
1. Fetch the PyPI JSON for the specific version being introduced or bumped:
`https://pypi.org/pypi/{package_name}/{version}/json`
2. Inspect the `urls` array in the response. For each distribution file (wheel
or sdist), note the filename.
3. For each filename, attempt to fetch the PyPI provenance attestation:
`https://pypi.org/integrity/{package_name}/{version}/{filename}/provenance`
- If the response is HTTP 200 and contains a valid attestation object,
inspect `attestation_bundles[*].publisher`. A Trusted Publisher attestation
will have a `kind` identifying the CI system (e.g. `"GitHub Actions"`,
`"GitLab"`) and a `repository` or `project` field matching the source
repository.
- If at least one distribution file has a valid Trusted Publisher attestation,
mark ✅ CI-uploaded.
- If no attestation is found for any file (404 for all), mark ❌ — "Release
has no provenance attestation; it may have been uploaded manually".
- If an attestation exists but the `publisher` does not identify a recognized
CI system or Trusted Publisher, mark ⚠️ — "Attestation present but
publisher cannot be verified as automated CI".
Note: if PyPI returns an error fetching the per-version JSON, fall back to the
latest JSON (`https://pypi.org/pypi/{package_name}/json`) and look up the
specific version in the `releases` dict.
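An illustrative sketch of the provenance lookup (again with `requests` standing in for web-fetch; the attestation parsing and the accepted `kind` values are assumptions):
```
import requests

def ci_upload_status(package: str, version: str) -> str:
    """Return "✅", "⚠️" or "❌" based on PyPI provenance attestations."""
    release = requests.get(
        f"https://pypi.org/pypi/{package}/{version}/json", timeout=30
    ).json()
    found_any = False
    for dist in release.get("urls", []):
        resp = requests.get(
            f"https://pypi.org/integrity/{package}/{version}/{dist['filename']}/provenance",
            timeout=30,
        )
        if resp.status_code != 200:
            continue  # no attestation for this distribution file
        found_any = True
        for bundle in resp.json().get("attestation_bundles", []):
            publisher = bundle.get("publisher", {})
            # The exact "kind" strings are an assumption; treat anything
            # unrecognized as "attestation present but unverified" (⚠️).
            if publisher.get("kind") in ("GitHub Actions", "GitHub", "GitLab"):
                return "✅"
    return "⚠️" if found_any else "❌"
```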
## Step 3 — Check Repository Availability
For each new or bumped package:
1. From the PyPI JSON at `info.project_urls`, find the source repository URL
(keys such as `"Source"`, `"Homepage"`, `"Repository"`, or `"Source Code"`).
2. Use web-fetch to perform a GET request to the repository URL.
3. If the response returns HTTP 200 and the page is publicly accessible, mark ✅.
4. If the URL is missing, returns a non-200 status, or redirects to a login
page, mark ❌ with a note that the repository could not be verified as public.
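A small sketch of this repository lookup; the key priority order and the login heuristic are assumptions, not project policy:
```
import requests

# Assumed priority order for project_urls keys; adjust as needed.
REPO_KEYS = ("Source", "Repository", "Source Code", "Homepage")

def find_repo_url(project_urls: dict[str, str] | None) -> str | None:
    """Pick the most likely source-repository URL from PyPI's project_urls."""
    for key in REPO_KEYS:
        if project_urls and key in project_urls:
            return project_urls[key]
    return None

def repo_is_public(url: str) -> bool:
    """True if the URL answers HTTP 200 without bouncing to a login page."""
    resp = requests.get(url, timeout=30, allow_redirects=True)
    # The "login" check is a crude heuristic for private-repo redirects.
    return resp.status_code == 200 and "login" not in resp.url.lower()
```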
## Step 4 — Check PR Description
Read the PR body from the GitHub API using the PR number `${{ github.event.pull_request.number }}`.
Extract all URLs present in the PR body.
### 4a — New packages: repository link required
For **new packages** (brand-new dependency not previously in any requirements
file): the PR description must contain a link that points to the package's
**source repository** as identified in Step 3 (the URL recorded from
`info.project_urls`). A PyPI page link alone is **not** acceptable — the link
must point directly to the source repository (e.g. a GitHub or GitLab URL).
- If a URL in the PR body matches (or is a sub-path of) the source repository
URL identified via PyPI, mark ✅.
- If the PR body contains a source repository URL that does **not** match the
repository URL found in the package's PyPI metadata (`info.project_urls`),
mark ❌ — "PR description links to `<pr_url>` but PyPI reports the source
repository as `<pypi_repo_url>`; please use the correct repository URL."
- If no source repository URL is present in the PR body at all, mark ❌ —
"PR description must link to the source repository at `<repo_url>` (found
via PyPI). A PyPI page link is not sufficient."
### 4b — Version bumps: changelog or diff link required
For **version bumps**: the PR description must contain a link to a changelog,
release notes page, or a diff/comparison URL that references the **correct
versions** being bumped (old → new).
Checks to perform for each bumped package (old version = X, new version = Y):
1. Extract all URLs from the PR body that contain the repository's domain or
path (as identified in Step 3).
2. Verify that at least one such URL includes both the old version string and
new version string in some form — e.g. a GitHub compare URL like
`compare/vX...vY`, a releases URL mentioning version Y, or a
`CHANGELOG.md` anchor referencing Y.
3. If no URL matches, check if the PR body contains any changelog/diff link at
all for this package.
Outcome:
- ✅ — a URL pointing to the correct repo with version references covering the
exact bump (X → Y).
- ⚠️ — a changelog/diff link exists but does not clearly reference the correct
versions or the correct repository; explain what was found and what is
expected.
- ❌ — no changelog or diff link found at all in the PR description for this
package.
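A simplified sketch of the URL matching described above; it only accepts URLs that mention both version strings, which is stricter than the prose allows:
```
import re

def link_covers_bump(pr_body: str, repo_url: str, old: str, new: str) -> bool:
    """True if the PR body links into the repo and mentions both versions."""
    repo_path = repo_url.rstrip("/").split("://", 1)[-1]
    for url in re.findall(r"https?://\S+", pr_body):
        if repo_path not in url:
            continue
        # Accepts e.g. .../compare/v1.2.3...v1.3.0; a release page or
        # changelog anchor mentioning only the new version would need a
        # looser check than this.
        if old in url and new in url:
            return True
    return False
```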
### 4c — Diff consistency check
For each **version bump**, verify that the version change recorded in the diff
(Step 1) is internally consistent:
- The `-` line must contain the old version and the `+` line must contain the
new version for the same package name.
- Flag ❌ if the diff shows a downgrade (new version < old version) without an
explanation, or if the version strings cannot be parsed.
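A short sketch of the consistency check, assuming the third-party `packaging` library is available:
```
from packaging.version import InvalidVersion, Version

def diff_is_consistent(old: str, new: str) -> str:
    """Return "✅" for an upgrade, "❌" for a downgrade or unparsable versions."""
    try:
        return "✅" if Version(new) > Version(old) else "❌"
    except InvalidVersion:
        return "❌"
```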
## Step 5 — Verify Source Repository is Publicly Accessible
Before inspecting the release pipeline, confirm that the source repository
identified in Step 3 is publicly reachable.
For each new or bumped package:
1. Use the source repository URL recorded in Step 3.
2. If no repository URL was found in `info.project_urls`, mark ❌ — "No source
repository URL found in PyPI metadata; a public source repository is
required."
3. If a repository URL was found, perform a GET request to that URL (using
web-fetch). If the response is HTTP 200 and returns a publicly accessible
page (not a login redirect or error page), mark ✅.
4. If the response is non-200, the URL redirects to a login/authentication page,
or the repository appears private or unavailable, mark ❌ — "Source
repository at `<repo_url>` is not publicly accessible. Home Assistant
requires all dependencies to have publicly available source code." **Do not
proceed with the release pipeline check (Step 6) for this package.**
## Step 6 — Check Release Pipeline Sanity
For each new or bumped package, determine the source repository host from the
URL identified in Step 3, then inspect whether the project's release/publish CI
workflow is sane. The checks differ by hosting provider.
### GitHub repositories (`github.com`)
1. Using the GitHub API, list the workflows in the source repository:
`GET /repos/{owner}/{repo}/actions/workflows`
2. Identify any workflow whose name or filename suggests publishing to PyPI
(e.g., contains "release", "publish", "pypi", or "deploy").
3. Fetch the workflow file content and check the following:
a. **Trigger sanity**: The publish job should be triggered by `push` to tags,
`release: published`, or `workflow_run` on a release job — **not** solely
by `workflow_dispatch` with no additional guards. A `workflow_dispatch`
trigger alongside other triggers is acceptable. Mark ❌ if the only trigger
is manual `workflow_dispatch` with no environment protection rules.
b. **OIDC / Trusted Publisher**: The workflow should use OIDC-based publishing.
Look for the `id-token: write` permission together with one of:
- the `pypa/gh-action-pypi-publish` action
- the `actions/attest-build-provenance` action
Separately, flag ❌ any step that sets `TWINE_PASSWORD` from
`secrets.PYPI_TOKEN` directly, since that means a long-lived API token is
used instead of OIDC.
Mark ✅ if OIDC is used, ⚠️ if the publish method cannot be determined,
❌ if a static secret token is the only credential.
c. **No manual upload bypass**: Verify there is no step that calls
`twine upload` (or an equivalent manual publish command) outside of a
properly gated job (e.g., one that requires an environment approval).
Flag ⚠️ if such steps exist.
4. If no publish workflow is found in the repository, mark ⚠️ — "No publish
workflow found; it is unclear how this package is released to PyPI."
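An illustrative sketch of the GitHub inspection, using unauthenticated REST calls (fine for public repositories, subject to rate limits); the name heuristics are assumptions:
```
import base64
import requests

API = "https://api.github.com"
PUBLISH_HINTS = ("release", "publish", "pypi", "deploy")  # assumed keywords

def find_publish_workflows(owner: str, repo: str) -> list[str]:
    """Return workflow file paths whose names hint at a PyPI publish job."""
    data = requests.get(f"{API}/repos/{owner}/{repo}/actions/workflows", timeout=30).json()
    return [
        wf["path"]
        for wf in data.get("workflows", [])
        if any(hint in (wf["name"] + wf["path"]).lower() for hint in PUBLISH_HINTS)
    ]

def uses_oidc_publishing(owner: str, repo: str, path: str) -> bool:
    """Crude text heuristic: OIDC permission plus the official publish action."""
    content = requests.get(f"{API}/repos/{owner}/{repo}/contents/{path}", timeout=30).json()
    text = base64.b64decode(content["content"]).decode()
    return "id-token: write" in text and "pypa/gh-action-pypi-publish" in text
```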
### GitLab repositories (`gitlab.com` or self-hosted GitLab)
1. Use the GitLab REST API to list CI/CD pipeline configuration files. First
resolve the project ID via
`GET https://gitlab.com/api/v4/projects/{url-encoded-namespace-and-name}`
and note the `id` field.
2. Fetch the repository's `.gitlab-ci.yml` (and any included files) using
`GET https://gitlab.com/api/v4/projects/{id}/repository/files/.gitlab-ci.yml/raw?ref=HEAD`
(use web-fetch for public repos).
3. Identify any job whose name or `stage` suggests publishing to PyPI
(e.g., "publish", "deploy", "release", "pypi").
4. For each such job, check:
a. **Trigger sanity**: The job should run only on tag pipelines (`only: tags`
or `rules: - if: $CI_COMMIT_TAG`) or on protected branches — **not**
solely on manual triggers (`when: manual`) with no additional protection.
Mark ❌ if the only trigger is manual with no environment or protected-branch
guard.
b. **Automated credentials**: The job should use GitLab's OIDC ID token
(`id_tokens:` block) with PyPI Trusted Publishing, or reference a
`$PYPI_TOKEN` injected from protected GitLab CI/CD variables (flag ❌ if
the token is hard-coded or unprotected).
Mark ✅ if OIDC or protected CI variables are used, ⚠️ if the method
cannot be determined, ❌ if credentials appear to be insecure.
c. **No manual upload bypass**: Flag ⚠️ if any job calls `twine upload`
without being behind a protected-variable or environment guard.
5. If no publish job is found, mark ⚠️ — "No publish job found in .gitlab-ci.yml;
it is unclear how this package is released to PyPI."
### Other code hosting providers
For repositories hosted on platforms other than GitHub or GitLab (e.g.,
Bitbucket, Codeberg, Gitea, Sourcehut):
1. Use web-fetch to retrieve the repository's root page and look for any
publicly visible CI configuration files (e.g., `.circleci/config.yml`,
`Jenkinsfile`, `azure-pipelines.yml`, `bitbucket-pipelines.yml`,
`.builds/*.yml` for Sourcehut).
2. Apply the same conceptual checks as above:
- Does publishing run on automated triggers (tags/releases), not solely
manual ones?
- Are credentials injected by the CI system (not hard-coded)?
- Is there a `twine upload` or equivalent step that could be run manually?
3. If no CI configuration can be retrieved, mark ⚠️ — "Release pipeline could
not be inspected; hosting provider is not GitHub or GitLab."
## Step 7 — Post a Review Comment
**Always** post a review comment using `add-comment`, regardless of whether
packages pass or fail. Use the following structure:
> **Note on deduplication**: The workflow automatically updates any previous
> requirements-check comment on the PR in place (preserving its position in the
> thread). If no previous comment exists, the newly created comment is kept as-is.
> You do not need to search for or update previous comments yourself.
### Comment structure
Begin every comment with the HTML marker `<!-- requirements-check -->` on its
own line (this is used by the workflow to find the previous comment and update
it on the next run).
### 7a — Overall summary line
Begin the comment with a single summary line, before anything else:
- If everything passed: `All requirements checks passed. ✅`
- If there are failures or warnings: `⚠️ Some checks require attention — see the details below.`
### 7b — Summary table
Render a compact table where every check column contains **only the status
icon** (✅, ⚠️, or ❌). No explanatory text belongs inside the table cells —
all detail goes in the per-package sections below.
Use `—` (em dash) when a check was skipped (e.g. Release Pipeline is skipped
when the repository is not publicly accessible).
```
<!-- requirements-check -->
## Requirements Check
| Package | Type | Old→New | License | Repo Public | CI Upload | Release Pipeline | PR Link | Diff Consistent |
|---------|------|---------|---------|-------------|-----------|------------------|---------|-----------------|
| PackageA | bump | 1.2.3→1.3.0 | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| PackageB | new | —→4.5.6 | ❌ | ✅ | ❌ | ⚠️ | ❌ | ✅ |
| PackageC | bump | 2.0.0→2.1.0 | ✅ | ❌ | — | — | ⚠️ | ✅ |
```
### 7c — Per-package detail sections
After the table, add one collapsible `<details>` block per package.
- If **all checks passed** for that package, render the block **collapsed**
(no `open` attribute) so the comment stays concise.
- If **any check failed or produced a warning**, render the block **open**
(`<details open>`) so the contributor sees the issues immediately.
Each block must include the full detail for every check: the license found, the
repository URL, whether a provenance attestation was found, the release
pipeline findings, the PR link found (or missing), and whether the diff is
consistent. For failed or warned checks, explain exactly what the contributor
must fix, including the expected source repository URL, expected version range,
etc.
Template (repeat for each package):
```
<details open>
<summary><strong>PackageB 📦 new —→4.5.6</strong></summary>
- **License**: ❌ License is `UNKNOWN` — not in the approved list. Check PyPI metadata and `script/licenses.py`.
- **Repository Public**: ✅ https://github.com/example/packageb is publicly accessible.
- **CI Upload**: ❌ No provenance attestation found for any distribution file. The release may have been uploaded manually.
- **Release Pipeline**: ⚠️ No publish workflow found in the repository; it is unclear how this package is released to PyPI.
- **PR Link**: ❌ PR description must link to the source repository at https://github.com/example/packageb (a PyPI page link is not sufficient).
- **Diff Consistent**: ✅
</details>
```
Collapsed example (all checks passed):
```
<details>
<summary><strong>PackageA 📦 bump 1.2.3→1.3.0</strong></summary>
- **License**: ✅ MIT
- **Repository Public**: ✅ https://github.com/example/packagea
- **CI Upload**: ✅ Trusted Publisher attestation found (GitHub Actions).
- **Release Pipeline**: ✅ OIDC via `pypa/gh-action-pypi-publish`; triggered on `release: published`; `environment: release` gate.
- **PR Link**: ✅ https://github.com/example/packagea/compare/v1.2.3...v1.3.0
- **Diff Consistent**: ✅
</details>
```
## Notes
- Be constructive and helpful. Provide direct links where possible so the
contributor can quickly fix the issue.
- If PyPI returns an error for a package, mention that it could not be found and
suggest the contributor verify the package name.
- For packages that only appear in `homeassistant/package_constraints.txt` or
`pyproject.toml` without being tied to a specific integration, the PR
description link requirement still applies.
- When checking test-only packages (from `requirements_test.txt` or
`requirements_test_all.txt`), apply the same license, repository, and PR
description checks as for production dependencies.
- A package that appears in both a production file and a test file should only
be reported once; use the production file entry as the canonical one.
- This workflow is only triggered when a commit actually changes one of the
tracked requirements files (for `synchronize` events GitHub compares the
before/after SHAs of the push, not the entire PR diff). Members can manually
retrigger the workflow via `workflow_dispatch` with the PR number to re-run
the check after updating the PR description or fixing issues without changing
any requirements files. On a retrigger the existing comment is updated in
place so there is always exactly one requirements-check comment in the PR.
+3 -29
@@ -366,7 +366,7 @@ jobs:
echo "key=uv-${UV_CACHE_VERSION}-${uv_version}-${HA_SHORT_VERSION}-$(date -u '+%Y-%m-%dT%H:%M:%s')" >> $GITHUB_OUTPUT
- name: Restore base Python virtual environment
id: cache-venv
uses: actions/cache/restore@27d5ce7f107fe9357f9df03efb73ab90386fccae # v5.0.5
uses: actions/cache@27d5ce7f107fe9357f9df03efb73ab90386fccae # v5.0.5
with:
path: venv
key: >-
@@ -374,8 +374,7 @@ jobs:
needs.info.outputs.python_cache_key }}
- name: Restore uv wheel cache
if: steps.cache-venv.outputs.cache-hit != 'true'
id: cache-uv
uses: actions/cache/restore@27d5ce7f107fe9357f9df03efb73ab90386fccae # v5.0.5
uses: actions/cache@27d5ce7f107fe9357f9df03efb73ab90386fccae # v5.0.5
with:
path: ${{ env.UV_CACHE_DIR }}
key: >-
@@ -399,7 +398,6 @@ jobs:
if: |
steps.cache-venv.outputs.cache-hit != 'true'
|| steps.cache-apt-check.outputs.cache-hit != 'true'
id: install-os-deps
timeout-minutes: 10
env:
APT_CACHE_HIT: ${{ steps.cache-apt-check.outputs.cache-hit }}
@@ -433,10 +431,7 @@ jobs:
sudo chmod -R 755 ${APT_CACHE_BASE}
fi
- name: Save apt cache
if: |
always()
&& steps.cache-apt-check.outputs.cache-hit != 'true'
&& steps.install-os-deps.outcome == 'success'
if: steps.cache-apt-check.outputs.cache-hit != 'true'
uses: actions/cache/save@27d5ce7f107fe9357f9df03efb73ab90386fccae # v5.0.5
with:
path: |
@@ -446,7 +441,6 @@ jobs:
${{ runner.os }}-${{ runner.arch }}-${{ needs.info.outputs.apt_cache_key }}
- name: Create Python virtual environment
if: steps.cache-venv.outputs.cache-hit != 'true'
id: create-venv
run: |
python -m venv venv
. venv/bin/activate
@@ -477,26 +471,6 @@ jobs:
- name: Check dirty
run: |
./script/check_dirty
- name: Save uv wheel cache
if: |
(success() && steps.cache-venv.outputs.cache-hit != 'true')
|| (always()
&& steps.create-venv.outcome == 'success'
&& steps.cache-uv.outputs.cache-matched-key == '')
uses: actions/cache/save@27d5ce7f107fe9357f9df03efb73ab90386fccae # v5.0.5
with:
path: ${{ env.UV_CACHE_DIR }}
key: >-
${{ runner.os }}-${{ runner.arch }}-${{ steps.python.outputs.python-version }}-${{
steps.generate-uv-key.outputs.key }}
- name: Save base Python virtual environment
if: always() && steps.create-venv.outcome == 'success'
uses: actions/cache/save@27d5ce7f107fe9357f9df03efb73ab90386fccae # v5.0.5
with:
path: venv
key: >-
${{ runner.os }}-${{ runner.arch }}-${{ steps.python.outputs.python-version }}-${{
needs.info.outputs.python_cache_key }}
hassfest:
name: Check hassfest
+3 -1
@@ -1,6 +1,6 @@
repos:
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.15.12
rev: v0.15.10
hooks:
- id: ruff-check
args:
@@ -23,6 +23,7 @@ repos:
- id: zizmor
args:
- --pedantic
exclude: ^\.github/workflows/check-requirements\.lock\.yml$
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v6.0.0
hooks:
@@ -46,6 +47,7 @@ repos:
additional_dependencies:
- prettier@3.6.2
- prettier-plugin-sort-json@4.2.0
exclude: ^\.github/workflows/check-requirements\.lock\.yml$
- repo: https://github.com/cdce8p/python-typing-update
rev: v0.6.0
hooks:
-1
@@ -599,7 +599,6 @@ homeassistant.components.vallox.*
homeassistant.components.valve.*
homeassistant.components.velbus.*
homeassistant.components.velux.*
homeassistant.components.victron_gx.*
homeassistant.components.vivotek.*
homeassistant.components.vlc_telnet.*
homeassistant.components.vodafone_station.*
+1
@@ -1,5 +1,6 @@
ignore: |
tests/fixtures/core/config/yaml_errors/
.github/workflows/check-requirements.lock.yml
rules:
braces:
level: error
+1 -1
@@ -12,7 +12,7 @@ This repository contains the core of Home Assistant, a Python 3 based home autom
## Python Syntax Notes
- Python 3.14 explicitly allows `except TypeA, TypeB:` without parentheses. Never flag this as an issue since Home Assistant officially supports Python 3.14.
- Python 3.14 explicitly allows `except TypeA, TypeB:` without parentheses.
## Testing
Generated
+2 -10
@@ -758,8 +758,6 @@ CLAUDE.md @home-assistant/core
/tests/components/homewizard/ @DCSBL
/homeassistant/components/honeywell/ @rdfurman @mkmer
/tests/components/honeywell/ @rdfurman @mkmer
/homeassistant/components/honeywell_string_lights/ @balloob
/tests/components/honeywell_string_lights/ @balloob
/homeassistant/components/hr_energy_qube/ @MattieGit
/tests/components/hr_energy_qube/ @MattieGit
/homeassistant/components/html5/ @alexyao2015 @tr4nt0r
@@ -1203,8 +1201,6 @@ CLAUDE.md @home-assistant/core
/tests/components/notify_events/ @matrozov @papajojo
/homeassistant/components/notion/ @bachya
/tests/components/notion/ @bachya
/homeassistant/components/novy_cooker_hood/ @piitaya
/tests/components/novy_cooker_hood/ @piitaya
/homeassistant/components/nrgkick/ @andijakl
/tests/components/nrgkick/ @andijakl
/homeassistant/components/nsw_fuel_station/ @nickw444
@@ -1241,8 +1237,6 @@ CLAUDE.md @home-assistant/core
/homeassistant/components/ollama/ @synesthesiam
/tests/components/ollama/ @synesthesiam
/homeassistant/components/ombi/ @larssont
/homeassistant/components/omie/ @luuuis
/tests/components/omie/ @luuuis
/homeassistant/components/onboarding/ @home-assistant/core
/tests/components/onboarding/ @home-assistant/core
/homeassistant/components/ondilo_ico/ @JeromeHXP
@@ -1421,8 +1415,6 @@ CLAUDE.md @home-assistant/core
/tests/components/radarr/ @tkdrob
/homeassistant/components/radio_browser/ @frenck
/tests/components/radio_browser/ @frenck
/homeassistant/components/radio_frequency/ @home-assistant/core
/tests/components/radio_frequency/ @home-assistant/core
/homeassistant/components/radiotherm/ @vinnyfuria
/tests/components/radiotherm/ @vinnyfuria
/homeassistant/components/rainbird/ @konikvranik @allenporter
@@ -1991,8 +1983,8 @@ CLAUDE.md @home-assistant/core
/tests/components/wled/ @frenck @mik-laj
/homeassistant/components/wmspro/ @mback2k
/tests/components/wmspro/ @mback2k
/homeassistant/components/wolflink/ @adamkrol93 @EnjoyingM
/tests/components/wolflink/ @adamkrol93 @EnjoyingM
/homeassistant/components/wolflink/ @adamkrol93 @mtielen
/tests/components/wolflink/ @adamkrol93 @mtielen
/homeassistant/components/workday/ @fabaff @gjohansson-ST
/tests/components/workday/ @fabaff @gjohansson-ST
/homeassistant/components/worldclock/ @fabaff
+1 -16
@@ -2,8 +2,7 @@
from __future__ import annotations
from collections.abc import Callable, Iterable
from typing import TYPE_CHECKING
from collections.abc import Callable
import voluptuous as vol
@@ -14,9 +13,6 @@ from .models import PermissionLookup
from .types import PolicyType
from .util import test_all
if TYPE_CHECKING:
from ..models import User
POLICY_SCHEMA = vol.Schema({vol.Optional(CAT_ENTITIES): ENTITY_POLICY_SCHEMA})
__all__ = [
@@ -26,21 +22,10 @@ __all__ = [
"PermissionLookup",
"PolicyPermissions",
"PolicyType",
"filter_entity_ids_by_permission",
"merge_policies",
]
def filter_entity_ids_by_permission(
user: User, entity_ids: Iterable[str], key: str
) -> list[str]:
"""Filter entity IDs to those the user can access for the given policy key."""
if user.is_admin or user.permissions.access_all_entities(key):
return list(entity_ids)
check_entity = user.permissions.check_entity
return [entity_id for entity_id in entity_ids if check_entity(entity_id, key)]
class AbstractPermissions:
"""Default permissions class."""
+1 -1
@@ -1,5 +1,5 @@
{
"domain": "honeywell",
"name": "Honeywell",
"integrations": ["lyric", "evohome", "honeywell", "honeywell_string_lights"]
"integrations": ["lyric", "evohome", "honeywell"]
}
+1 -1
@@ -143,4 +143,4 @@ class AcaiaRestoreSensor(AcaiaEntity, RestoreSensor):
@property
def available(self) -> bool:
"""Return True if entity is available."""
return super().available or self.native_value is not None
return super().available or self._restored_data is not None
@@ -4,7 +4,7 @@ from __future__ import annotations
from asyncio import timeout
from collections.abc import Mapping
from typing import TYPE_CHECKING, Any
from typing import Any
from accuweather import AccuWeather, ApiError, InvalidApiKeyError, RequestsExceededError
from aiohttp import ClientError
@@ -12,7 +12,7 @@ from aiohttp.client_exceptions import ClientConnectorError
import voluptuous as vol
from homeassistant.config_entries import ConfigFlow, ConfigFlowResult
from homeassistant.const import CONF_API_KEY, CONF_LATITUDE, CONF_LONGITUDE
from homeassistant.const import CONF_API_KEY, CONF_LATITUDE, CONF_LONGITUDE, CONF_NAME
from homeassistant.helpers import config_validation as cv
from homeassistant.helpers.aiohttp_client import async_get_clientsession
@@ -55,11 +55,8 @@ class AccuWeatherFlowHandler(ConfigFlow, domain=DOMAIN):
)
self._abort_if_unique_id_configured()
if TYPE_CHECKING:
assert accuweather.location_name is not None
return self.async_create_entry(
title=accuweather.location_name, data=user_input
title=user_input[CONF_NAME], data=user_input
)
return self.async_show_form(
@@ -73,6 +70,9 @@ class AccuWeatherFlowHandler(ConfigFlow, domain=DOMAIN):
vol.Optional(
CONF_LONGITUDE, default=self.hass.config.longitude
): cv.longitude,
vol.Optional(
CONF_NAME, default=self.hass.config.location_name
): str,
}
),
errors=errors,
@@ -64,7 +64,7 @@ class AccuWeatherObservationDataUpdateCoordinator(
"""Initialize."""
self.accuweather = accuweather
self.location_key = accuweather.location_key
name = config_entry.data.get(CONF_NAME) or config_entry.title
name = config_entry.data[CONF_NAME]
if TYPE_CHECKING:
assert self.location_key is not None
@@ -122,7 +122,7 @@ class AccuWeatherForecastDataUpdateCoordinator(
self.accuweather = accuweather
self.location_key = accuweather.location_key
self._fetch_method = fetch_method
name = config_entry.data.get(CONF_NAME) or config_entry.title
name = config_entry.data[CONF_NAME]
if TYPE_CHECKING:
assert self.location_key is not None
@@ -25,7 +25,8 @@
"data": {
"api_key": "[%key:common::config_flow::data::api_key%]",
"latitude": "[%key:common::config_flow::data::latitude%]",
"longitude": "[%key:common::config_flow::data::longitude%]"
"longitude": "[%key:common::config_flow::data::longitude%]",
"name": "[%key:common::config_flow::data::name%]"
},
"data_description": {
"api_key": "API key generated in the AccuWeather APIs portal."
+1 -24
@@ -38,7 +38,6 @@ HVAC_MODE_MAPPING_ACTRONAIR_TO_HA = {
"HEAT": HVACMode.HEAT,
"FAN": HVACMode.FAN_ONLY,
"AUTO": HVACMode.AUTO,
"DRY": HVACMode.DRY,
"OFF": HVACMode.OFF,
}
HVAC_MODE_MAPPING_HA_TO_ACTRONAIR = {
@@ -80,6 +79,7 @@ class ActronAirClimateEntity(ClimateEntity):
)
_attr_name = None
_attr_fan_modes = list(FAN_MODE_MAPPING_ACTRONAIR_TO_HA.values())
_attr_hvac_modes = list(HVAC_MODE_MAPPING_ACTRONAIR_TO_HA.values())
class ActronSystemClimate(ActronAirAcEntity, ActronAirClimateEntity):
@@ -93,17 +93,6 @@ class ActronSystemClimate(ActronAirAcEntity, ActronAirClimateEntity):
super().__init__(coordinator)
self._attr_unique_id = self._serial_number
@property
def hvac_modes(self) -> list[HVACMode]:
"""Return the list of supported HVAC modes."""
modes = [
HVAC_MODE_MAPPING_ACTRONAIR_TO_HA[mode]
for mode in self._status.user_aircon_settings.supported_modes
if mode in HVAC_MODE_MAPPING_ACTRONAIR_TO_HA
]
modes.append(HVACMode.OFF)
return modes
@property
def min_temp(self) -> float:
"""Return the minimum temperature that can be set."""
@@ -190,18 +179,6 @@ class ActronZoneClimate(ActronAirZoneEntity, ActronAirClimateEntity):
super().__init__(coordinator, zone)
self._attr_unique_id: str = self._zone_identifier
@property
def hvac_modes(self) -> list[HVACMode]:
"""Return the list of supported HVAC modes."""
status = self.coordinator.data
modes = [
HVAC_MODE_MAPPING_ACTRONAIR_TO_HA[mode]
for mode in status.user_aircon_settings.supported_modes
if mode in HVAC_MODE_MAPPING_ACTRONAIR_TO_HA
]
modes.append(HVACMode.OFF)
return modes
@property
def min_temp(self) -> float:
"""Return the minimum temperature that can be set."""
@@ -13,5 +13,5 @@
"integration_type": "hub",
"iot_class": "cloud_polling",
"quality_scale": "silver",
"requirements": ["actron-neo-api==0.5.6"]
"requirements": ["actron-neo-api==0.5.3"]
}
@@ -25,7 +25,7 @@ async def async_get_media_source(hass: HomeAssistant) -> MediaSource:
hass.data[DATA_MEDIA_SOURCE] = source = local_source.LocalSource(
hass,
DOMAIN,
"AI generated images",
"AI Generated Images",
{IMAGE_DIR: str(media_dir)},
f"/{DOMAIN}",
)
@@ -4,8 +4,11 @@
required: true
default: any
selector:
automation_behavior:
mode: condition
select:
translation_key: condition_behavior
options:
- all
- any
# --- Unit lists for multi-unit pollutants ---
@@ -237,6 +237,21 @@
"name": "Volatile organic compounds value"
}
},
"selector": {
"condition_behavior": {
"options": {
"all": "All",
"any": "Any"
}
},
"trigger_behavior": {
"options": {
"any": "Any",
"first": "First",
"last": "Last"
}
}
},
"title": "Air Quality",
"triggers": {
"co2_changed": {
@@ -3,8 +3,12 @@
required: true
default: any
selector:
automation_behavior:
mode: trigger
select:
translation_key: trigger_behavior
options:
- first
- last
- any
for: &trigger_for
required: true
default: 00:00:00
+12 -9
@@ -12,11 +12,11 @@ from airly.exceptions import AirlyError
import voluptuous as vol
from homeassistant.config_entries import ConfigFlow, ConfigFlowResult
from homeassistant.const import CONF_API_KEY, CONF_LATITUDE, CONF_LONGITUDE
from homeassistant.const import CONF_API_KEY, CONF_LATITUDE, CONF_LONGITUDE, CONF_NAME
from homeassistant.helpers import config_validation as cv
from homeassistant.helpers.aiohttp_client import async_get_clientsession
from .const import CONF_USE_NEAREST, DEFAULT_NAME, DOMAIN, NO_AIRLY_SENSORS
from .const import CONF_USE_NEAREST, DOMAIN, NO_AIRLY_SENSORS
DESCRIPTION_PLACEHOLDERS = {
"developer_registration_url": "https://developer.airly.eu/register",
@@ -45,16 +45,16 @@ class AirlyFlowHandler(ConfigFlow, domain=DOMAIN):
try:
location_point_valid = await check_location(
websession,
user_input[CONF_API_KEY],
user_input[CONF_LATITUDE],
user_input[CONF_LONGITUDE],
user_input["api_key"],
user_input["latitude"],
user_input["longitude"],
)
if not location_point_valid:
location_nearest_valid = await check_location(
websession,
user_input[CONF_API_KEY],
user_input[CONF_LATITUDE],
user_input[CONF_LONGITUDE],
user_input["api_key"],
user_input["latitude"],
user_input["longitude"],
use_nearest=True,
)
except AirlyError as err:
@@ -68,7 +68,7 @@ class AirlyFlowHandler(ConfigFlow, domain=DOMAIN):
return self.async_abort(reason="wrong_location")
use_nearest = True
return self.async_create_entry(
title=DEFAULT_NAME,
title=user_input[CONF_NAME],
data={**user_input, CONF_USE_NEAREST: use_nearest},
)
@@ -83,6 +83,9 @@ class AirlyFlowHandler(ConfigFlow, domain=DOMAIN):
vol.Optional(
CONF_LONGITUDE, default=self.hass.config.longitude
): cv.longitude,
vol.Optional(
CONF_NAME, default=self.hass.config.location_name
): str,
}
),
errors=errors,
-2
@@ -37,5 +37,3 @@ MAX_UPDATE_INTERVAL: Final = 90
MIN_UPDATE_INTERVAL: Final = 5
NO_AIRLY_SENSORS: Final = "There are no Airly sensors in this area yet."
URL = "https://airly.org/map/#{latitude},{longitude}"
DEFAULT_NAME: Final = "Airly"
+2 -2
@@ -127,7 +127,7 @@ SENSOR_TYPES: tuple[AirlySensorEntityDescription, ...] = (
),
AirlySensorEntityDescription(
key=ATTR_API_CO,
device_class=SensorDeviceClass.CO,
translation_key="co",
native_unit_of_measurement=CONCENTRATION_MICROGRAMS_PER_CUBIC_METER,
state_class=SensorStateClass.MEASUREMENT,
suggested_display_precision=0,
@@ -178,7 +178,7 @@ async def async_setup_entry(
async_add_entities: AddConfigEntryEntitiesCallback,
) -> None:
"""Set up Airly sensor entities based on a config entry."""
name = entry.data.get(CONF_NAME) or entry.title
name = entry.data[CONF_NAME]
coordinator = entry.runtime_data
+5 -1
@@ -13,7 +13,8 @@
"data": {
"api_key": "[%key:common::config_flow::data::api_key%]",
"latitude": "[%key:common::config_flow::data::latitude%]",
"longitude": "[%key:common::config_flow::data::longitude%]"
"longitude": "[%key:common::config_flow::data::longitude%]",
"name": "[%key:common::config_flow::data::name%]"
},
"description": "To generate API key go to {developer_registration_url}"
}
@@ -23,6 +24,9 @@
"sensor": {
"caqi": {
"name": "Common air quality index"
},
"co": {
"name": "[%key:component::sensor::entity_component::carbon_monoxide::name%]"
}
}
},
@@ -36,8 +36,6 @@ class AirTouch5ConfigFlow(ConfigFlow, domain=DOMAIN):
_LOGGER.exception("Unexpected exception")
errors = {"base": "cannot_connect"}
else:
# Uses the host/IP value from CONF_HOST as unique ID, which is no longer allowed
# pylint: disable-next=hass-unique-id-ip-based
await self.async_set_unique_id(user_input[CONF_HOST])
self._abort_if_unique_id_configured()
return self.async_create_entry(
@@ -7,8 +7,11 @@
required: true
default: any
selector:
automation_behavior:
mode: condition
select:
translation_key: condition_behavior
options:
- all
- any
.condition_common_for: &condition_common_for
target: *condition_common_target
@@ -160,6 +160,21 @@
"message": "Arming requires a code but none was given for {entity_id}."
}
},
"selector": {
"condition_behavior": {
"options": {
"all": "All",
"any": "Any"
}
},
"trigger_behavior": {
"options": {
"any": "Any",
"first": "First",
"last": "Last"
}
}
},
"services": {
"alarm_arm_away": {
"description": "Arms an alarm in the away mode.",
@@ -7,8 +7,12 @@
required: true
default: any
selector:
automation_behavior:
mode: trigger
select:
options:
- first
- last
- any
translation_key: trigger_behavior
for:
required: true
default: 00:00:00
@@ -11,7 +11,6 @@ from .services import async_setup_services
PLATFORMS = [
Platform.BINARY_SENSOR,
Platform.BUTTON,
Platform.NOTIFY,
Platform.SENSOR,
Platform.SWITCH,
@@ -1,55 +0,0 @@
"""Support for buttons."""
from homeassistant.components.button import ButtonEntity
from homeassistant.core import HomeAssistant
from homeassistant.helpers.entity import EntityDescription
from homeassistant.helpers.entity_platform import AddConfigEntryEntitiesCallback
from homeassistant.util import slugify
from .coordinator import AmazonConfigEntry, AmazonDevicesCoordinator
from .entity import AmazonServiceEntity
# Coordinator is used to centralize the data updates
PARALLEL_UPDATES = 0
async def async_setup_entry(
hass: HomeAssistant,
entry: AmazonConfigEntry,
async_add_entities: AddConfigEntryEntitiesCallback,
) -> None:
"""Set up button entities for Alexa Devices."""
coordinator = entry.runtime_data
known_routines: set[str] = set()
def _check_routines() -> None:
current_routines = set(coordinator.api.routines)
new_routines = current_routines - known_routines
if new_routines:
known_routines.update(new_routines)
async_add_entities(
AmazonRoutineButton(coordinator, routine) for routine in new_routines
)
_check_routines()
entry.async_on_unload(coordinator.async_add_listener(_check_routines))
class AmazonRoutineButton(AmazonServiceEntity, ButtonEntity):
"""Button entity for Alexa routine."""
_attr_has_entity_name = True
def __init__(self, coordinator: AmazonDevicesCoordinator, routine: str) -> None:
"""Initialize the routine button entity."""
self._coordinator = coordinator
self._routine = routine
super().__init__(
coordinator,
EntityDescription(key=slugify(routine), name=routine),
)
async def async_press(self) -> None:
"""Handle button press action."""
await self._coordinator.api.call_routine(self._routine)
@@ -12,13 +12,12 @@ from aioamazondevices.structures import AmazonDevice
from aiohttp import ClientSession
from homeassistant.config_entries import ConfigEntry
from homeassistant.const import CONF_PASSWORD, CONF_USERNAME, Platform
from homeassistant.const import CONF_PASSWORD, CONF_USERNAME
from homeassistant.core import HomeAssistant
from homeassistant.exceptions import ConfigEntryAuthFailed
from homeassistant.helpers import device_registry as dr, entity_registry as er
from homeassistant.helpers import device_registry as dr
from homeassistant.helpers.debounce import Debouncer
from homeassistant.helpers.update_coordinator import DataUpdateCoordinator, UpdateFailed
from homeassistant.util import slugify
from .const import _LOGGER, CONF_LOGIN_DATA, DOMAIN
@@ -65,13 +64,6 @@ class AmazonDevicesCoordinator(DataUpdateCoordinator[dict[str, AmazonDevice]]):
for identifier_domain, identifier in device.identifiers
if identifier_domain == DOMAIN
}
self.previous_routines: set[str] = {
routine.unique_id
for routine in er.async_entries_for_config_entry(
er.async_get(hass), entry.entry_id
)
if routine.domain == Platform.BUTTON
}
async def _async_update_data(self) -> dict[str, AmazonDevice]:
"""Update device data."""
@@ -100,13 +92,8 @@ class AmazonDevicesCoordinator(DataUpdateCoordinator[dict[str, AmazonDevice]]):
current_devices = set(data.keys())
if stale_devices := self.previous_devices - current_devices:
await self._async_remove_device_stale(stale_devices)
self.previous_devices = current_devices
current_routines = {slugify(routine) for routine in self.api.routines}
if stale_routines := self.previous_routines - current_routines:
await self._async_remove_routine_stale(stale_routines)
self.previous_routines = current_routines
return data
async def _async_remove_device_stale(
@@ -129,23 +116,3 @@ class AmazonDevicesCoordinator(DataUpdateCoordinator[dict[str, AmazonDevice]]):
device_id=device.id,
remove_config_entry_id=self.config_entry.entry_id,
)
async def _async_remove_routine_stale(
self,
stale_routines: set[str],
) -> None:
"""Remove stale routine."""
entity_registry = er.async_get(self.hass)
for routine in stale_routines:
_LOGGER.debug(
"Detected change in routines: routine %s removed",
routine,
)
entity_id = entity_registry.async_get_entity_id(
Platform.BUTTON,
DOMAIN,
f"{slugify(self.config_entry.unique_id)}-{slugify(routine)}",
)
if entity_id:
entity_registry.async_remove(entity_id)
@@ -2,10 +2,9 @@
from aioamazondevices.structures import AmazonDevice
from homeassistant.helpers.device_registry import DeviceEntryType, DeviceInfo
from homeassistant.helpers.device_registry import DeviceInfo
from homeassistant.helpers.entity import EntityDescription
from homeassistant.helpers.update_coordinator import CoordinatorEntity
from homeassistant.util import slugify
from .const import DOMAIN
from .coordinator import AmazonDevicesCoordinator
@@ -51,32 +50,3 @@ class AmazonEntity(CoordinatorEntity[AmazonDevicesCoordinator]):
and self._serial_num in self.coordinator.data
and self.device.online
)
class AmazonServiceEntity(CoordinatorEntity[AmazonDevicesCoordinator]):
"""Defines Alexa Devices entity for service device."""
_attr_has_entity_name = True
def __init__(
self,
coordinator: AmazonDevicesCoordinator,
description: EntityDescription,
) -> None:
"""Initialize the service entity."""
super().__init__(coordinator)
self._attr_device_info = DeviceInfo(
identifiers={(DOMAIN, service_device_id(coordinator))},
manufacturer="Amazon",
entry_type=DeviceEntryType.SERVICE,
)
self.entity_description = description
self._attr_unique_id = (
f"{slugify(coordinator.config_entry.unique_id)}-{description.key}"
)
def service_device_id(coordinator: AmazonDevicesCoordinator) -> str:
"""Return service device id."""
return slugify(f"{coordinator.config_entry.unique_id}_service_device")
+2 -1
@@ -39,6 +39,7 @@ from homeassistant.helpers.typing import ConfigType
from .binary_sensor import BINARY_SENSOR_KEYS, BINARY_SENSORS, check_binary_sensors
from .camera import STREAM_SOURCE_LIST
from .const import (
CAMERAS,
COMM_RETRIES,
COMM_TIMEOUT,
DATA_AMCREST,
@@ -358,7 +359,7 @@ def _start_event_monitor(
async def async_setup(hass: HomeAssistant, config: ConfigType) -> bool:
"""Set up the Amcrest IP Camera component."""
hass.data.setdefault(DATA_AMCREST, {DEVICES: {}})
hass.data.setdefault(DATA_AMCREST, {DEVICES: {}, CAMERAS: []})
for device in config[DOMAIN]:
name: str = device[CONF_NAME]
+74 -10
@@ -12,11 +12,13 @@ import aiohttp
from aiohttp import web
from amcrest import AmcrestError
from haffmpeg.camera import CameraMjpeg
import voluptuous as vol
from homeassistant.components.camera import Camera, CameraEntityFeature
from homeassistant.components.ffmpeg import FFmpegManager, get_ffmpeg_manager
from homeassistant.const import CONF_NAME, STATE_OFF, STATE_ON
from homeassistant.const import ATTR_ENTITY_ID, CONF_NAME, STATE_OFF, STATE_ON
from homeassistant.core import HomeAssistant, callback
from homeassistant.helpers import config_validation as cv
from homeassistant.helpers.aiohttp_client import (
async_aiohttp_proxy_stream,
async_aiohttp_proxy_web,
@@ -27,13 +29,11 @@ from homeassistant.helpers.entity_platform import AddEntitiesCallback
from homeassistant.helpers.typing import ConfigType, DiscoveryInfoType
from .const import (
ATTR_COLOR_BW,
CAMERA_WEB_SESSION_TIMEOUT,
CBW,
CAMERAS,
COMM_TIMEOUT,
DATA_AMCREST,
DEVICES,
MOV,
RESOLUTION_TO_STREAM,
SERVICE_UPDATE,
SNAPSHOT_TIMEOUT,
@@ -49,11 +49,65 @@ SCAN_INTERVAL = timedelta(seconds=15)
STREAM_SOURCE_LIST = ["snapshot", "mjpeg", "rtsp"]
_ATTR_PTZ_TT = "travel_time"
_ATTR_PTZ_MOV = "movement"
_MOV = [
"zoom_out",
"zoom_in",
"right",
"left",
"up",
"down",
"right_down",
"right_up",
"left_down",
"left_up",
]
_ZOOM_ACTIONS = ["ZoomWide", "ZoomTele"]
_MOVE_1_ACTIONS = ["Right", "Left", "Up", "Down"]
_MOVE_2_ACTIONS = ["RightDown", "RightUp", "LeftDown", "LeftUp"]
_ACTION = _ZOOM_ACTIONS + _MOVE_1_ACTIONS + _MOVE_2_ACTIONS
_DEFAULT_TT = 0.2
_ATTR_PRESET = "preset"
_ATTR_COLOR_BW = "color_bw"
_CBW_COLOR = "color"
_CBW_AUTO = "auto"
_CBW_BW = "bw"
_CBW = [_CBW_COLOR, _CBW_AUTO, _CBW_BW]
_SRV_SCHEMA = vol.Schema({vol.Optional(ATTR_ENTITY_ID): cv.comp_entity_ids})
_SRV_GOTO_SCHEMA = _SRV_SCHEMA.extend(
{vol.Required(_ATTR_PRESET): vol.All(vol.Coerce(int), vol.Range(min=1))}
)
_SRV_CBW_SCHEMA = _SRV_SCHEMA.extend({vol.Required(_ATTR_COLOR_BW): vol.In(_CBW)})
_SRV_PTZ_SCHEMA = _SRV_SCHEMA.extend(
{
vol.Required(_ATTR_PTZ_MOV): vol.In(_MOV),
vol.Optional(_ATTR_PTZ_TT, default=_DEFAULT_TT): cv.small_float,
}
)
CAMERA_SERVICES = {
"enable_recording": (_SRV_SCHEMA, "async_enable_recording", ()),
"disable_recording": (_SRV_SCHEMA, "async_disable_recording", ()),
"enable_audio": (_SRV_SCHEMA, "async_enable_audio", ()),
"disable_audio": (_SRV_SCHEMA, "async_disable_audio", ()),
"enable_motion_recording": (_SRV_SCHEMA, "async_enable_motion_recording", ()),
"disable_motion_recording": (_SRV_SCHEMA, "async_disable_motion_recording", ()),
"goto_preset": (_SRV_GOTO_SCHEMA, "async_goto_preset", (_ATTR_PRESET,)),
"set_color_bw": (_SRV_CBW_SCHEMA, "async_set_color_bw", (_ATTR_COLOR_BW,)),
"start_tour": (_SRV_SCHEMA, "async_start_tour", ()),
"stop_tour": (_SRV_SCHEMA, "async_stop_tour", ()),
"ptz_control": (
_SRV_PTZ_SCHEMA,
"async_ptz_control",
(_ATTR_PTZ_MOV, _ATTR_PTZ_TT),
),
}
_BOOL_TO_STATE = {True: STATE_ON, False: STATE_OFF}
@@ -221,7 +275,7 @@ class AmcrestCam(Camera):
self._motion_recording_enabled
)
if self._color_bw is not None:
attr[ATTR_COLOR_BW] = self._color_bw
attr[_ATTR_COLOR_BW] = self._color_bw
return attr
@property
@@ -268,7 +322,15 @@ class AmcrestCam(Camera):
self.async_schedule_update_ha_state(True)
async def async_added_to_hass(self) -> None:
"""Subscribe to signals."""
"""Subscribe to signals and add camera to list."""
self._unsub_dispatcher.extend(
async_dispatcher_connect(
self.hass,
service_signal(service, self.entity_id),
getattr(self, callback_name),
)
for service, (_, callback_name, _) in CAMERA_SERVICES.items()
)
self._unsub_dispatcher.append(
async_dispatcher_connect(
self.hass,
@@ -276,9 +338,11 @@ class AmcrestCam(Camera):
self.async_on_demand_update,
)
)
self.hass.data[DATA_AMCREST][CAMERAS].append(self.entity_id)
async def async_will_remove_from_hass(self) -> None:
"""Disconnect from signals."""
"""Remove camera from list and disconnect from signals."""
self.hass.data[DATA_AMCREST][CAMERAS].remove(self.entity_id)
for unsub_dispatcher in self._unsub_dispatcher:
unsub_dispatcher()
@@ -392,7 +456,7 @@ class AmcrestCam(Camera):
async def async_ptz_control(self, movement: str, travel_time: float) -> None:
"""Move or zoom camera in specified direction."""
code = _ACTION[MOV.index(movement)]
code = _ACTION[_MOV.index(movement)]
kwargs = {"code": code, "arg1": 0, "arg2": 0, "arg3": 0}
if code in _MOVE_1_ACTIONS:
@@ -549,10 +613,10 @@ class AmcrestCam(Camera):
)
async def _async_get_color_mode(self) -> str:
return CBW[await self._api.async_day_night_color]
return _CBW[await self._api.async_day_night_color]
async def _async_set_color_mode(self, cbw: str) -> None:
await self._api.async_set_day_night_color(CBW.index(cbw), channel=0)
await self._api.async_set_day_night_color(_CBW.index(cbw), channel=0)
async def _async_set_color_bw(self, cbw: str) -> None:
"""Set camera color mode."""
+1 -15
View File
@@ -2,6 +2,7 @@
DOMAIN = "amcrest"
DATA_AMCREST = DOMAIN
CAMERAS = "cameras"
DEVICES = "devices"
BINARY_SENSOR_SCAN_INTERVAL_SECS = 5
@@ -16,18 +17,3 @@ SERVICE_UPDATE = "update"
RESOLUTION_LIST = {"high": 0, "low": 1}
RESOLUTION_TO_STREAM = {0: "Main", 1: "Extra"}
ATTR_COLOR_BW = "color_bw"
CBW = ["color", "auto", "bw"]
MOV = [
"zoom_out",
"zoom_in",
"right",
"left",
"up",
"down",
"right_down",
"right_up",
"left_down",
"left_up",
]
+52 -57
View File
@@ -1,67 +1,62 @@
"""Services for Amcrest IP cameras."""
"""Support for Amcrest IP cameras."""
from __future__ import annotations
import voluptuous as vol
from homeassistant.auth.models import User
from homeassistant.auth.permissions.const import POLICY_CONTROL
from homeassistant.const import ATTR_ENTITY_ID, ENTITY_MATCH_ALL, ENTITY_MATCH_NONE
from homeassistant.core import HomeAssistant, ServiceCall, callback
from homeassistant.exceptions import Unauthorized, UnknownUser
from homeassistant.helpers.dispatcher import async_dispatcher_send
from homeassistant.helpers.service import async_extract_entity_ids
from homeassistant.components.camera import DOMAIN as CAMERA_DOMAIN
from homeassistant.core import HomeAssistant, callback
from homeassistant.helpers import config_validation as cv, service
from .const import ATTR_COLOR_BW, CBW, DOMAIN, MOV
_ATTR_PRESET = "preset"
_ATTR_PTZ_MOV = "movement"
_ATTR_PTZ_TT = "travel_time"
_DEFAULT_TT = 0.2
from .camera import CAMERA_SERVICES
from .const import CAMERAS, DATA_AMCREST, DOMAIN
from .helpers import service_signal
@callback
def async_setup_services(hass: HomeAssistant) -> None:
"""Set up the Amcrest IP Camera services."""
for service_name, func in (
("enable_recording", "async_enable_recording"),
("disable_recording", "async_disable_recording"),
("enable_audio", "async_enable_audio"),
("disable_audio", "async_disable_audio"),
("enable_motion_recording", "async_enable_motion_recording"),
("disable_motion_recording", "async_disable_motion_recording"),
("start_tour", "async_start_tour"),
("stop_tour", "async_stop_tour"),
):
service.async_register_platform_entity_service(
hass,
DOMAIN,
service_name,
entity_domain=CAMERA_DOMAIN,
schema=None,
func=func,
)
service.async_register_platform_entity_service(
hass,
DOMAIN,
"goto_preset",
entity_domain=CAMERA_DOMAIN,
schema={vol.Required(_ATTR_PRESET): vol.All(vol.Coerce(int), vol.Range(min=1))},
func="async_goto_preset",
)
service.async_register_platform_entity_service(
hass,
DOMAIN,
"set_color_bw",
entity_domain=CAMERA_DOMAIN,
schema={vol.Required(ATTR_COLOR_BW): vol.In(CBW)},
func="async_set_color_bw",
)
service.async_register_platform_entity_service(
hass,
DOMAIN,
"ptz_control",
entity_domain=CAMERA_DOMAIN,
schema={
vol.Required(_ATTR_PTZ_MOV): vol.In(MOV),
vol.Optional(_ATTR_PTZ_TT, default=_DEFAULT_TT): cv.small_float,
},
func="async_ptz_control",
)
def have_permission(user: User | None, entity_id: str) -> bool:
return not user or user.permissions.check_entity(entity_id, POLICY_CONTROL)
async def async_extract_from_service(call: ServiceCall) -> list[str]:
if call.context.user_id:
user = await hass.auth.async_get_user(call.context.user_id)
if user is None:
raise UnknownUser(context=call.context)
else:
user = None
if call.data.get(ATTR_ENTITY_ID) == ENTITY_MATCH_ALL:
# Return all entity_ids user has permission to control.
return [
entity_id
for entity_id in hass.data[DATA_AMCREST][CAMERAS]
if have_permission(user, entity_id)
]
if call.data.get(ATTR_ENTITY_ID) == ENTITY_MATCH_NONE:
return []
call_ids = await async_extract_entity_ids(call)
entity_ids = []
for entity_id in hass.data[DATA_AMCREST][CAMERAS]:
if entity_id not in call_ids:
continue
if not have_permission(user, entity_id):
raise Unauthorized(
context=call.context, entity_id=entity_id, permission=POLICY_CONTROL
)
entity_ids.append(entity_id)
return entity_ids
async def async_service_handler(call: ServiceCall) -> None:
args = [call.data[arg] for arg in CAMERA_SERVICES[call.service][2]]
for entity_id in await async_extract_from_service(call):
async_dispatcher_send(hass, service_signal(call.service, entity_id), *args)
for service, params in CAMERA_SERVICES.items():
hass.services.async_register(DOMAIN, service, async_service_handler, params[0])
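A simplified, self-contained sketch (plain asyncio, no Home Assistant imports) of the dispatch pattern used by async_setup_services above: one service handler resolves the target entity ids and fans the call out to per-entity callbacks registered under a per-service signal. service_signal, connect, send and FakeCamera are illustrative stand-ins, not the real helpers.

# Miniature signal-dispatch sketch mirroring the services.py handler above.
import asyncio
from collections import defaultdict
from collections.abc import Callable

_signals: defaultdict[str, list[Callable[..., None]]] = defaultdict(list)


def service_signal(service: str, entity_id: str) -> str:
    return f"{service}_{entity_id}".replace(".", "_")


def connect(signal: str, target: Callable[..., None]) -> None:
    _signals[signal].append(target)


def send(signal: str, *args) -> None:
    for target in _signals[signal]:
        target(*args)


class FakeCamera:
    def __init__(self, entity_id: str) -> None:
        self.entity_id = entity_id
        connect(service_signal("ptz_control", entity_id), self.ptz_control)

    def ptz_control(self, movement: str, travel_time: float) -> None:
        print(f"{self.entity_id}: move {movement} for {travel_time}s")


async def handle_service(call_data: dict, cameras: list[str]) -> None:
    # The real handler also filters entity ids by the caller's permissions first.
    for entity_id in cameras:
        send(service_signal("ptz_control", entity_id),
             call_data["movement"], call_data["travel_time"])


cameras = [FakeCamera("camera.front").entity_id, FakeCamera("camera.back").entity_id]
asyncio.run(handle_service({"movement": "left", "travel_time": 0.2}, cameras))

In the real code, async_extract_from_service additionally enforces POLICY_CONTROL per entity before the dispatcher send, as shown above.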
@@ -76,7 +76,6 @@ from .const import (
ATTR_HEALTHY,
ATTR_INTEGRATION_COUNT,
ATTR_INTEGRATIONS,
ATTR_ISSUE_TRACKER,
ATTR_OPERATING_SYSTEM,
ATTR_PROTECTED,
ATTR_RECORDER,
@@ -415,7 +414,6 @@ class Analytics:
custom_integrations.append(
{
ATTR_DOMAIN: integration.domain,
ATTR_ISSUE_TRACKER: integration.issue_tracker,
ATTR_VERSION: integration.version,
}
)
@@ -36,7 +36,6 @@ ATTR_HEALTHY = "healthy"
ATTR_INSTALLATION_TYPE = "installation_type"
ATTR_INTEGRATION_COUNT = "integration_count"
ATTR_INTEGRATIONS = "integrations"
ATTR_ISSUE_TRACKER = "issue_tracker"
ATTR_ONBOARDED = "onboarded"
ATTR_OPERATING_SYSTEM = "operating_system"
ATTR_PREFERENCES = "preferences"
+228 -411
View File
@@ -1,8 +1,7 @@
"""Base entity for Anthropic."""
import base64
from collections import deque
from collections.abc import AsyncIterator, Callable, Iterable
from collections.abc import AsyncGenerator, Callable, Iterable
from dataclasses import dataclass, field
from datetime import UTC, datetime
import json
@@ -21,22 +20,18 @@ from anthropic.types import (
CitationWebSearchResultLocationParam,
CodeExecutionTool20250825Param,
CodeExecutionToolResultBlock,
CodeExecutionToolResultBlockContent,
CodeExecutionToolResultBlockParamContentParam,
Container,
ContentBlock,
ContentBlockParam,
DocumentBlockParam,
ImageBlockParam,
InputJSONDelta,
JSONOutputFormatParam,
Message,
MessageDeltaUsage,
MessageParam,
MessageStreamEvent,
ModelInfo,
OutputConfigParam,
RawContentBlockDelta,
RawContentBlockDeltaEvent,
RawContentBlockStartEvent,
RawContentBlockStopEvent,
@@ -73,30 +68,18 @@ from anthropic.types import (
WebSearchTool20250305Param,
WebSearchTool20260209Param,
WebSearchToolResultBlock,
WebSearchToolResultBlockContent,
WebSearchToolResultBlockParamContentParam,
)
from anthropic.types.bash_code_execution_tool_result_block import (
Content as BashCodeExecutionToolResultBlockContent,
)
from anthropic.types.bash_code_execution_tool_result_block_param import (
Content as BashCodeExecutionToolResultBlockParamContentParam,
)
from anthropic.types.message_create_params import MessageCreateParamsStreaming
from anthropic.types.raw_message_delta_event import Delta
from anthropic.types.text_editor_code_execution_tool_result_block import (
Content as TextEditorCodeExecutionToolResultBlockContent,
)
from anthropic.types.text_editor_code_execution_tool_result_block_param import (
Content as TextEditorCodeExecutionToolResultBlockParamContentParam,
)
from anthropic.types.tool_search_tool_result_block import (
Content as ToolSearchToolResultBlockContent,
)
from anthropic.types.tool_search_tool_result_block_param import (
Content as ToolSearchToolResultBlockParamContentParam,
)
from anthropic.types.tool_use_block import Caller
import voluptuous as vol
from voluptuous_openapi import convert
@@ -108,7 +91,7 @@ from homeassistant.helpers import device_registry as dr, llm
from homeassistant.helpers.json import json_dumps
from homeassistant.helpers.update_coordinator import CoordinatorEntity
from homeassistant.util import slugify
from homeassistant.util.json import JsonArrayType, JsonObjectType
from homeassistant.util.json import JsonObjectType
from .const import (
CONF_CHAT_MODEL,
@@ -462,7 +445,13 @@ def _convert_content( # noqa: C901
return messages, container_id
class AnthropicDeltaStream:
async def _transform_stream( # noqa: C901 - This is complex, but better to have it in one place
chat_log: conversation.ChatLog,
stream: AsyncStream[MessageStreamEvent],
output_tool: str | None = None,
) -> AsyncGenerator[
conversation.AssistantContentDeltaDict | conversation.ToolResultContentDeltaDict
]:
"""Transform the response stream into HA format.
A typical stream of responses might look something like the following:
@@ -492,376 +481,201 @@ class AnthropicDeltaStream:
Each message could contain multiple blocks of the same type.
"""
def __init__(
self,
chat_log: conversation.ChatLog,
stream: AsyncStream[MessageStreamEvent],
output_tool: str | None = None,
) -> None:
"""Initialize the delta stream."""
self._chat_log: conversation.ChatLog = chat_log
self._stream: AsyncStream[MessageStreamEvent] = stream
self._output_tool: str | None = output_tool
self._buffer: deque[
conversation.AssistantContentDeltaDict
| conversation.ToolResultContentDeltaDict
] = deque()
self._stream_iterator: AsyncIterator[MessageStreamEvent] | None = None
self._current_tool_block: ToolUseBlockParam | ServerToolUseBlockParam | None = (
None
if stream is None or not hasattr(stream, "__aiter__"):
raise HomeAssistantError(
translation_domain=DOMAIN, translation_key="unexpected_stream_object"
)
self._current_tool_args: str = ""
self._content_details = ContentDetails()
self._content_details.add_citation_detail()
self._input_usage: Usage | None = None
self._first_block: bool = True
def __aiter__(
self,
) -> AsyncIterator[
conversation.AssistantContentDeltaDict | conversation.ToolResultContentDeltaDict
]:
"""Initialize the stream and return the async iterator."""
if self._stream is None or not hasattr(self._stream, "__aiter__"):
raise HomeAssistantError(
translation_domain=DOMAIN, translation_key="unexpected_stream_object"
)
if self._stream_iterator is None:
self._stream_iterator = self._stream.__aiter__()
return self
current_tool_block: ToolUseBlockParam | ServerToolUseBlockParam | None = None
current_tool_args: str
content_details = ContentDetails()
content_details.add_citation_detail()
input_usage: Usage | None = None
first_block: bool = True
async def __anext__(
self,
) -> (
conversation.AssistantContentDeltaDict | conversation.ToolResultContentDeltaDict
):
"""Get the next item from the stream."""
while True:
if self._buffer:
return self._buffer.popleft()
async for response in stream:
LOGGER.debug("Received response: %s", response)
response = await self._stream_iterator.__anext__() # type: ignore[union-attr]
LOGGER.debug("Received response: %s", response)
self.on_message_stream_event(response)
def on_message_stream_event(self, event: MessageStreamEvent) -> None:
"""Handle MessageStreamEvent."""
if isinstance(event, RawMessageStartEvent):
self.on_message_start_event(event.message)
return
if isinstance(event, RawContentBlockStartEvent):
self.on_content_block_start_event(event.content_block, event.index)
return
if isinstance(event, RawContentBlockDeltaEvent):
self.on_content_block_delta_event(event.delta)
return
if isinstance(event, RawContentBlockStopEvent):
self.on_content_block_stop_event(event.index)
return
if isinstance(event, RawMessageDeltaEvent):
self.on_message_delta_event(event.delta, event.usage)
return
if isinstance(event, RawMessageStopEvent):
self.on_message_stop_event()
return
LOGGER.debug("Unhandled event type: %s", event.type) # type: ignore[unreachable] # pragma: no cover - All types are handled but we want to verify that
def on_message_start_event(self, message: Message) -> None:
"""Handle RawMessageStartEvent."""
self._input_usage = message.usage
self._first_block = True
def on_content_block_start_event(
self, content_block: ContentBlock, index: int
) -> None:
"""Handle RawContentBlockStartEvent."""
if isinstance(content_block, ToolUseBlock):
self.on_tool_use_block(
content_block.id,
content_block.input,
content_block.name,
content_block.caller,
)
return
if isinstance(content_block, TextBlock):
self.on_text_block(content_block.text, content_block.citations)
return
if isinstance(content_block, ThinkingBlock):
self.on_thinking_block(content_block.thinking, content_block.signature)
return
if isinstance(content_block, RedactedThinkingBlock):
self.on_redacted_thinking_block(content_block.data)
return
if isinstance(content_block, ServerToolUseBlock):
self.on_server_tool_use_block(
content_block.id,
content_block.name,
content_block.input,
content_block.caller,
)
return
if isinstance(
content_block,
(
WebSearchToolResultBlock,
CodeExecutionToolResultBlock,
BashCodeExecutionToolResultBlock,
TextEditorCodeExecutionToolResultBlock,
ToolSearchToolResultBlock,
),
):
self.on_server_tool_result_block(
content_block.tool_use_id,
content_block.type,
content_block.content,
content_block.caller if hasattr(content_block, "caller") else None,
)
return
LOGGER.debug("Unhandled content block type: %s", content_block.type)
def on_tool_use_block(
self, id: str, input: dict[str, Any], name: str, caller: Caller | None
) -> None:
"""Handle ToolUseBlock."""
self._current_tool_block = ToolUseBlockParam(
type="tool_use",
id=id,
name=name,
input=input,
)
self._current_tool_args = ""
if name == self._output_tool:
if self._first_block or self._content_details.has_content():
if self._content_details:
self._content_details.delete_empty()
self._buffer.append({"native": self._content_details})
self._content_details = ContentDetails()
self._content_details.add_citation_detail()
self._buffer.append({"role": "assistant"})
self._first_block = False
def on_text_block(self, text: str, citations: list[TextCitation] | None) -> None:
"""Handle TextBlock."""
if ( # Do not start a new assistant content just for citations, concatenate consecutive blocks with citations instead.
self._first_block
or (
not self._content_details.has_citations()
and citations is None
and self._content_details.has_content()
)
):
if self._content_details:
self._content_details.delete_empty()
self._buffer.append({"native": self._content_details})
self._content_details = ContentDetails()
self._buffer.append({"role": "assistant"})
self._first_block = False
self._content_details.add_citation_detail()
if text:
self._content_details.citation_details[-1].length += len(text)
self._buffer.append({"content": text})
def on_thinking_block(self, thinking: str, signature: str) -> None:
"""Handle ThinkingBlock."""
if self._first_block or self._content_details.thinking_signature:
if self._content_details:
self._content_details.delete_empty()
self._buffer.append({"native": self._content_details})
self._content_details = ContentDetails()
self._content_details.add_citation_detail()
self._buffer.append({"role": "assistant"})
self._first_block = False
def on_redacted_thinking_block(self, data: str) -> None:
"""Handle RedactedThinkingBlock."""
LOGGER.debug(
"Some of Claudes internal reasoning has been automatically "
"encrypted for safety reasons. This doesnt affect the quality of "
"responses"
)
if self._first_block or self._content_details.redacted_thinking:
if self._content_details:
self._content_details.delete_empty()
self._buffer.append({"native": self._content_details})
self._content_details = ContentDetails()
self._content_details.add_citation_detail()
self._buffer.append({"role": "assistant"})
self._first_block = False
self._content_details.redacted_thinking = data
def on_server_tool_use_block(
self,
id: str,
name: Literal[
"web_search",
"web_fetch",
"code_execution",
"bash_code_execution",
"text_editor_code_execution",
"tool_search_tool_regex",
"tool_search_tool_bm25",
],
input: dict[str, Any],
caller: Caller | None,
) -> None:
"""Handle ServerToolUseBlock."""
self._current_tool_block = ServerToolUseBlockParam(
type="server_tool_use",
id=id,
name=name,
input=input,
)
self._current_tool_args = ""
def on_server_tool_result_block(
self,
tool_use_id: str,
tool_name: Literal[
"web_search_tool_result",
"code_execution_tool_result",
"bash_code_execution_tool_result",
"text_editor_code_execution_tool_result",
"tool_search_tool_result",
],
content: WebSearchToolResultBlockContent
| CodeExecutionToolResultBlockContent
| BashCodeExecutionToolResultBlockContent
| TextEditorCodeExecutionToolResultBlockContent
| ToolSearchToolResultBlockContent,
caller: Caller | None,
) -> None:
"""Handle various server tool result blocks."""
if self._content_details:
self._content_details.delete_empty()
self._buffer.append({"native": self._content_details})
self._content_details = ContentDetails()
self._content_details.add_citation_detail()
self._buffer.append(
{
"role": "tool_result",
"tool_call_id": tool_use_id,
"tool_name": tool_name.removesuffix("_tool_result"),
"tool_result": {
"content": cast(JsonArrayType, [x.to_dict() for x in content])
if isinstance(response, RawMessageStartEvent):
input_usage = response.message.usage
first_block = True
elif isinstance(response, RawContentBlockStartEvent):
if isinstance(response.content_block, ToolUseBlock):
current_tool_block = ToolUseBlockParam(
type="tool_use",
id=response.content_block.id,
name=response.content_block.name,
input=response.content_block.input or {},
)
current_tool_args = ""
if response.content_block.name == output_tool:
if first_block or content_details.has_content():
if content_details:
content_details.delete_empty()
yield {"native": content_details}
content_details = ContentDetails()
content_details.add_citation_detail()
yield {"role": "assistant"}
first_block = False
elif isinstance(response.content_block, TextBlock):
if ( # Do not start a new assistant content just for citations, concatenate consecutive blocks with citations instead.
first_block
or (
not content_details.has_citations()
and response.content_block.citations is None
and content_details.has_content()
)
):
if content_details:
content_details.delete_empty()
yield {"native": content_details}
content_details = ContentDetails()
yield {"role": "assistant"}
first_block = False
content_details.add_citation_detail()
if response.content_block.text:
content_details.citation_details[-1].length += len(
response.content_block.text
)
yield {"content": response.content_block.text}
elif isinstance(response.content_block, ThinkingBlock):
if first_block or content_details.thinking_signature:
if content_details:
content_details.delete_empty()
yield {"native": content_details}
content_details = ContentDetails()
content_details.add_citation_detail()
yield {"role": "assistant"}
first_block = False
elif isinstance(response.content_block, RedactedThinkingBlock):
LOGGER.debug(
"Some of Claudes internal reasoning has been automatically "
"encrypted for safety reasons. This doesnt affect the quality of "
"responses"
)
if first_block or content_details.redacted_thinking:
if content_details:
content_details.delete_empty()
yield {"native": content_details}
content_details = ContentDetails()
content_details.add_citation_detail()
yield {"role": "assistant"}
first_block = False
content_details.redacted_thinking = response.content_block.data
elif isinstance(response.content_block, ServerToolUseBlock):
current_tool_block = ServerToolUseBlockParam(
type="server_tool_use",
id=response.content_block.id,
name=response.content_block.name,
input=response.content_block.input or {},
)
current_tool_args = ""
elif isinstance(
response.content_block,
(
WebSearchToolResultBlock,
CodeExecutionToolResultBlock,
BashCodeExecutionToolResultBlock,
TextEditorCodeExecutionToolResultBlock,
ToolSearchToolResultBlock,
),
):
if content_details:
content_details.delete_empty()
yield {"native": content_details}
content_details = ContentDetails()
content_details.add_citation_detail()
yield {
"role": "tool_result",
"tool_call_id": response.content_block.tool_use_id,
"tool_name": response.content_block.type.removesuffix(
"_tool_result"
),
"tool_result": {
"content": cast(
JsonObjectType, response.content_block.to_dict()["content"]
)
}
if isinstance(response.content_block.content, list)
else cast(JsonObjectType, response.content_block.content.to_dict()),
}
if isinstance(content, list)
else cast(JsonObjectType, content.to_dict()),
}
)
self._first_block = True
def on_content_block_delta_event(self, delta: RawContentBlockDelta) -> None:
"""Handle RawContentBlockDeltaEvent."""
if isinstance(delta, InputJSONDelta):
self.on_input_json_delta(delta.partial_json)
return
if isinstance(delta, TextDelta):
self.on_text_delta(delta.text)
return
if isinstance(delta, ThinkingDelta):
self.on_thinking_delta(delta.thinking)
return
if isinstance(delta, SignatureDelta):
self.on_signature_delta(delta.signature)
return
if isinstance(delta, CitationsDelta):
self.on_citations_delta(delta.citation)
return
LOGGER.debug("Unhandled content delta type: %s", delta.type) # type: ignore[unreachable] # pragma: no cover - All types are handled but we want to verify that
def on_input_json_delta(self, partial_json: str) -> None:
"""Handle InputJSONDelta."""
if (
self._current_tool_block is not None
and self._current_tool_block["name"] == self._output_tool
):
self._content_details.citation_details[-1].length += len(partial_json)
self._buffer.append({"content": partial_json})
else:
self._current_tool_args += partial_json
def on_text_delta(self, text: str) -> None:
"""Handle TextDelta."""
if text:
self._content_details.citation_details[-1].length += len(text)
self._buffer.append({"content": text})
def on_thinking_delta(self, thinking: str) -> None:
"""Handle ThinkingDelta."""
if thinking:
self._buffer.append({"thinking_content": thinking})
def on_signature_delta(self, signature: str) -> None:
"""Handle SignatureDelta."""
self._content_details.thinking_signature = signature
def on_citations_delta(self, citation: TextCitation) -> None:
"""Handle CitationsDelta."""
self._content_details.add_citation(citation)
def on_content_block_stop_event(self, index: int) -> None:
"""Handle RawContentBlockStopEvent."""
if self._current_tool_block is not None:
if self._current_tool_block["name"] == self._output_tool:
self._current_tool_block = None
return
tool_args = (
json.loads(self._current_tool_args) if self._current_tool_args else {}
)
self._current_tool_block["input"] |= tool_args
self._buffer.append(
{
first_block = True
elif isinstance(response, RawContentBlockDeltaEvent):
if isinstance(response.delta, InputJSONDelta):
if (
current_tool_block is not None
and current_tool_block["name"] == output_tool
):
content_details.citation_details[-1].length += len(
response.delta.partial_json
)
yield {"content": response.delta.partial_json}
else:
current_tool_args += response.delta.partial_json
elif isinstance(response.delta, TextDelta):
if response.delta.text:
content_details.citation_details[-1].length += len(
response.delta.text
)
yield {"content": response.delta.text}
elif isinstance(response.delta, ThinkingDelta):
if response.delta.thinking:
yield {"thinking_content": response.delta.thinking}
elif isinstance(response.delta, SignatureDelta):
content_details.thinking_signature = response.delta.signature
elif isinstance(response.delta, CitationsDelta):
content_details.add_citation(response.delta.citation)
elif isinstance(response, RawContentBlockStopEvent):
if current_tool_block is not None:
if current_tool_block["name"] == output_tool:
current_tool_block = None
continue
tool_args = json.loads(current_tool_args) if current_tool_args else {}
current_tool_block["input"] |= tool_args
yield {
"tool_calls": [
llm.ToolInput(
id=self._current_tool_block["id"],
tool_name=self._current_tool_block["name"],
tool_args=self._current_tool_block["input"],
external=self._current_tool_block["type"]
== "server_tool_use",
id=current_tool_block["id"],
tool_name=current_tool_block["name"],
tool_args=current_tool_block["input"],
external=current_tool_block["type"] == "server_tool_use",
)
]
}
)
self._current_tool_block = None
current_tool_block = None
elif isinstance(response, RawMessageDeltaEvent):
if (usage := response.usage) is not None:
chat_log.async_trace(_create_token_stats(input_usage, usage))
content_details.container = response.delta.container
if response.delta.stop_reason == "refusal":
raise HomeAssistantError(
translation_domain=DOMAIN, translation_key="api_refusal"
)
elif isinstance(response, RawMessageStopEvent):
if content_details:
content_details.delete_empty()
yield {"native": content_details}
content_details = ContentDetails()
content_details.add_citation_detail()
def on_message_delta_event(self, delta: Delta, usage: MessageDeltaUsage) -> None:
"""Handle RawMessageDeltaEvent."""
self._chat_log.async_trace(self._create_token_stats(self._input_usage, usage))
self._content_details.container = delta.container
if delta.stop_reason == "refusal":
raise HomeAssistantError(
translation_domain=DOMAIN, translation_key="api_refusal"
)
def on_message_stop_event(self) -> None:
"""Handle RawMessageStopEvent."""
if self._content_details:
self._content_details.delete_empty()
self._buffer.append({"native": self._content_details})
self._content_details = ContentDetails()
self._content_details.add_citation_detail()
def _create_token_stats(
self, input_usage: Usage | None, response_usage: MessageDeltaUsage
) -> dict[str, Any]:
"""Create token stats for conversation agent tracing."""
input_tokens = 0
cached_input_tokens = 0
if input_usage:
input_tokens = input_usage.input_tokens
cached_input_tokens = input_usage.cache_creation_input_tokens or 0
output_tokens = response_usage.output_tokens
return {
"stats": {
"input_tokens": input_tokens,
"cached_input_tokens": cached_input_tokens,
"output_tokens": output_tokens,
}
def _create_token_stats(
input_usage: Usage | None, response_usage: MessageDeltaUsage
) -> dict[str, Any]:
"""Create token stats for conversation agent tracing."""
input_tokens = 0
cached_input_tokens = 0
if input_usage:
input_tokens = input_usage.input_tokens
cached_input_tokens = input_usage.cache_creation_input_tokens or 0
output_tokens = response_usage.output_tokens
return {
"stats": {
"input_tokens": input_tokens,
"cached_input_tokens": cached_input_tokens,
"output_tokens": output_tokens,
}
}
class AnthropicBaseLLMEntity(CoordinatorEntity[AnthropicCoordinator]):
@@ -889,14 +703,15 @@ class AnthropicBaseLLMEntity(CoordinatorEntity[AnthropicCoordinator]):
entry_type=dr.DeviceEntryType.SERVICE,
)
async def _get_model_args( # noqa: C901
async def _async_handle_chat_log( # noqa: C901
self,
chat_log: conversation.ChatLog,
structure_name: str | None = None,
structure: vol.Schema | None = None,
) -> tuple[MessageCreateParamsStreaming, str | None]:
"""Get the model arguments."""
options: dict[str, Any] = DEFAULT | self.subentry.data
max_iterations: int = MAX_TOOL_ITERATIONS,
) -> None:
"""Generate an answer for the chat log."""
options = self.subentry.data
preloaded_tools = [
"HassTurnOn",
@@ -914,18 +729,21 @@ class AnthropicBaseLLMEntity(CoordinatorEntity[AnthropicCoordinator]):
messages, container_id = _convert_content(chat_log.content[1:])
model = options[CONF_CHAT_MODEL]
model = options.get(CONF_CHAT_MODEL, DEFAULT[CONF_CHAT_MODEL])
model_args = MessageCreateParamsStreaming(
model=model,
messages=messages,
max_tokens=options[CONF_MAX_TOKENS],
max_tokens=options.get(CONF_MAX_TOKENS, DEFAULT[CONF_MAX_TOKENS]),
system=system.content,
stream=True,
container=container_id,
)
if options[CONF_PROMPT_CACHING] == PromptCaching.PROMPT:
if (
options.get(CONF_PROMPT_CACHING, DEFAULT[CONF_PROMPT_CACHING])
== PromptCaching.PROMPT
):
model_args["system"] = [
{
"type": "text",
@@ -933,14 +751,19 @@ class AnthropicBaseLLMEntity(CoordinatorEntity[AnthropicCoordinator]):
"cache_control": {"type": "ephemeral"},
}
]
elif options[CONF_PROMPT_CACHING] == PromptCaching.AUTOMATIC:
elif (
options.get(CONF_PROMPT_CACHING, DEFAULT[CONF_PROMPT_CACHING])
== PromptCaching.AUTOMATIC
):
model_args["cache_control"] = {"type": "ephemeral"}
if (
self.model_info.capabilities
and self.model_info.capabilities.thinking.types.adaptive.supported
):
thinking_effort = options[CONF_THINKING_EFFORT]
thinking_effort = options.get(
CONF_THINKING_EFFORT, DEFAULT[CONF_THINKING_EFFORT]
)
if thinking_effort != "none":
model_args["thinking"] = ThinkingConfigAdaptiveParam(
type="adaptive", display="summarized"
@@ -949,7 +772,9 @@ class AnthropicBaseLLMEntity(CoordinatorEntity[AnthropicCoordinator]):
else:
model_args["thinking"] = ThinkingConfigDisabledParam(type="disabled")
else:
thinking_budget = options[CONF_THINKING_BUDGET]
thinking_budget = options.get(
CONF_THINKING_BUDGET, DEFAULT[CONF_THINKING_BUDGET]
)
if (
self.model_info.capabilities
and self.model_info.capabilities.thinking.types.enabled.supported
@@ -966,7 +791,9 @@ class AnthropicBaseLLMEntity(CoordinatorEntity[AnthropicCoordinator]):
and self.model_info.capabilities.effort.supported
):
model_args["output_config"] = OutputConfigParam(
effort=options[CONF_THINKING_EFFORT]
effort=options.get(
CONF_THINKING_EFFORT, DEFAULT[CONF_THINKING_EFFORT]
)
)
tools: list[ToolUnionParam] = []
@@ -976,12 +803,12 @@ class AnthropicBaseLLMEntity(CoordinatorEntity[AnthropicCoordinator]):
for tool in chat_log.llm_api.tools
]
if options[CONF_CODE_EXECUTION]:
if options.get(CONF_CODE_EXECUTION):
# The `web_search_20260209` tool automatically enables `code_execution_20260120` tool
if (
not self.model_info.capabilities
or not self.model_info.capabilities.code_execution.supported
or not options[CONF_WEB_SEARCH]
or not options.get(CONF_WEB_SEARCH)
):
tools.append(
CodeExecutionTool20250825Param(
@@ -990,26 +817,26 @@ class AnthropicBaseLLMEntity(CoordinatorEntity[AnthropicCoordinator]):
),
)
if options[CONF_WEB_SEARCH]:
if options.get(CONF_WEB_SEARCH):
if (
not self.model_info.capabilities
or not self.model_info.capabilities.code_execution.supported
or not options[CONF_CODE_EXECUTION]
or not options.get(CONF_CODE_EXECUTION)
):
web_search: WebSearchTool20250305Param | WebSearchTool20260209Param = (
WebSearchTool20250305Param(
name="web_search",
type="web_search_20250305",
max_uses=options[CONF_WEB_SEARCH_MAX_USES],
max_uses=options.get(CONF_WEB_SEARCH_MAX_USES),
)
)
else:
web_search = WebSearchTool20260209Param(
name="web_search",
type="web_search_20260209",
max_uses=options[CONF_WEB_SEARCH_MAX_USES],
max_uses=options.get(CONF_WEB_SEARCH_MAX_USES),
)
if options[CONF_WEB_SEARCH_USER_LOCATION]:
if options.get(CONF_WEB_SEARCH_USER_LOCATION):
web_search["user_location"] = {
"type": "approximate",
"city": options.get(CONF_WEB_SEARCH_CITY, ""),
@@ -1110,7 +937,10 @@ class AnthropicBaseLLMEntity(CoordinatorEntity[AnthropicCoordinator]):
preloaded_tools.append(structure_name)
if tools:
if options[CONF_TOOL_SEARCH] and len(tools) > len(preloaded_tools) + 1:
if (
options.get(CONF_TOOL_SEARCH, DEFAULT[CONF_TOOL_SEARCH])
and len(tools) > len(preloaded_tools) + 1
):
for tool in tools:
if not tool["name"].endswith(tuple(preloaded_tools)):
tool["defer_loading"] = True
@@ -1123,19 +953,6 @@ class AnthropicBaseLLMEntity(CoordinatorEntity[AnthropicCoordinator]):
model_args["tools"] = tools
return model_args, structure_name
async def _async_handle_chat_log(
self,
chat_log: conversation.ChatLog,
structure_name: str | None = None,
structure: vol.Schema | None = None,
max_iterations: int = MAX_TOOL_ITERATIONS,
) -> None:
"""Generate an answer for the chat log."""
model_args, structure_name = await self._get_model_args(
chat_log, structure_name, structure
)
coordinator = self.entry.runtime_data
client = coordinator.client
@@ -1149,7 +966,7 @@ class AnthropicBaseLLMEntity(CoordinatorEntity[AnthropicCoordinator]):
content
async for content in chat_log.async_add_delta_content_stream(
self.entity_id,
AnthropicDeltaStream(
_transform_stream(
chat_log,
stream,
output_tool=structure_name or None,
@@ -1157,7 +974,7 @@ class AnthropicBaseLLMEntity(CoordinatorEntity[AnthropicCoordinator]):
)
]
)
cast(list[MessageParam], model_args["messages"]).extend(new_messages)
messages.extend(new_messages)
except anthropic.AuthenticationError as err:
# Trigger coordinator to confirm the auth failure and trigger the reauth flow.
await coordinator.async_request_refresh()
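The refactor above replaces the buffering AnthropicDeltaStream class with a module-level async generator. A simplified standalone sketch of that shape follows; the event types are hypothetical stand-ins for the anthropic SDK ones, so only the generator structure is illustrated.

# Minimal async-generator stream transform, in the spirit of _transform_stream.
import asyncio
from collections.abc import AsyncGenerator
from dataclasses import dataclass


@dataclass
class BlockStart:
    pass


@dataclass
class TextDelta:
    text: str


async def fake_stream() -> AsyncGenerator[object, None]:
    for event in (BlockStart(), TextDelta("Hello "), TextDelta("world")):
        yield event


async def transform_stream(
    stream: AsyncGenerator[object, None],
) -> AsyncGenerator[dict, None]:
    first_block = True
    async for event in stream:
        if isinstance(event, BlockStart):
            if first_block:
                yield {"role": "assistant"}  # open a new assistant message
                first_block = False
        elif isinstance(event, TextDelta):
            yield {"content": event.text}  # stream content as it arrives


async def main() -> None:
    async for delta in transform_stream(fake_stream()):
        print(delta)


asyncio.run(main())

Compared with the class, the generator keeps all block-handling state in local variables (current_tool_block, content_details, first_block) and no longer needs an internal deque buffer.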
@@ -155,12 +155,7 @@ async def async_setup(hass: HomeAssistant, config: ConfigType) -> bool:
hass.data[DATA_COMPONENT] = storage_collection
collection.DictStorageCollectionWebsocket(
storage_collection,
DOMAIN,
DOMAIN,
CREATE_FIELDS,
UPDATE_FIELDS,
admin_only=True,
storage_collection, DOMAIN, DOMAIN, CREATE_FIELDS, UPDATE_FIELDS
).async_setup(hass)
websocket_api.async_register_command(hass, handle_integration_list)
@@ -346,7 +341,6 @@ async def handle_integration_list(
vol.Required("config_entry_id"): str,
}
)
@websocket_api.require_admin
@websocket_api.async_response
async def handle_config_entry(
hass: HomeAssistant, connection: ActiveConnection, msg: dict[str, Any]
+1 -1
View File
@@ -28,7 +28,7 @@ class AquacellEntity(CoordinatorEntity[AquacellCoordinator]):
self._attr_unique_id = f"{softener_key}-{entity_key}"
self._attr_device_info = DeviceInfo(
name=self.softener.name,
hw_version=self.softener.diagnostics.fw_version,
hw_version=self.softener.fwVersion,
identifiers={(DOMAIN, str(softener_key))},
manufacturer=self.softener.brand,
model=self.softener.ssn,
@@ -8,5 +8,5 @@
"integration_type": "device",
"iot_class": "cloud_polling",
"loggers": ["aioaquacell"],
"requirements": ["aioaquacell==1.0.0"]
"requirements": ["aioaquacell==0.2.0"]
}
+7 -7
View File
@@ -38,39 +38,39 @@ SENSORS: tuple[SoftenerSensorEntityDescription, ...] = (
translation_key="salt_left_side_percentage",
state_class=SensorStateClass.MEASUREMENT,
native_unit_of_measurement=PERCENTAGE,
value_fn=lambda softener: softener.salt.left_percent,
value_fn=lambda softener: softener.salt.leftPercent,
),
SoftenerSensorEntityDescription(
key="salt_right_side_percentage",
translation_key="salt_right_side_percentage",
state_class=SensorStateClass.MEASUREMENT,
native_unit_of_measurement=PERCENTAGE,
value_fn=lambda softener: softener.salt.right_percent,
value_fn=lambda softener: softener.salt.rightPercent,
),
SoftenerSensorEntityDescription(
key="salt_left_side_time_remaining",
translation_key="salt_left_side_time_remaining",
device_class=SensorDeviceClass.DURATION,
native_unit_of_measurement=UnitOfTime.DAYS,
value_fn=lambda softener: softener.salt.left_days,
value_fn=lambda softener: softener.salt.leftDays,
),
SoftenerSensorEntityDescription(
key="salt_right_side_time_remaining",
translation_key="salt_right_side_time_remaining",
device_class=SensorDeviceClass.DURATION,
native_unit_of_measurement=UnitOfTime.DAYS,
value_fn=lambda softener: softener.salt.right_days,
value_fn=lambda softener: softener.salt.rightDays,
),
SoftenerSensorEntityDescription(
key="battery",
device_class=SensorDeviceClass.BATTERY,
native_unit_of_measurement=PERCENTAGE,
value_fn=lambda softener: softener.diagnostics.battery,
value_fn=lambda softener: softener.battery,
),
SoftenerSensorEntityDescription(
key="wi_fi_strength",
translation_key="wi_fi_strength",
value_fn=lambda softener: softener.diagnostics.wifi_level,
value_fn=lambda softener: softener.wifiLevel,
device_class=SensorDeviceClass.ENUM,
options=[
"high",
@@ -82,7 +82,7 @@ SENSORS: tuple[SoftenerSensorEntityDescription, ...] = (
key="last_update",
translation_key="last_update",
device_class=SensorDeviceClass.TIMESTAMP,
value_fn=lambda softener: softener.diagnostics.last_update,
value_fn=lambda softener: softener.lastUpdate,
),
)
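A small standalone sketch of the value_fn pattern used by these sensor descriptions; FakeSoftener and FakeSalt are hypothetical stand-ins that only mirror the aioaquacell 0.2.0 attribute names referenced above.

# Each description carries a callable that pulls its value off the softener object.
from collections.abc import Callable
from dataclasses import dataclass


@dataclass
class FakeSalt:
    leftPercent: int
    rightPercent: int


@dataclass
class FakeSoftener:
    salt: FakeSalt
    battery: int


@dataclass(frozen=True)
class SoftenerSensorDescription:
    key: str
    value_fn: Callable[[FakeSoftener], int]


SENSORS = (
    SoftenerSensorDescription("salt_left_side_percentage", lambda s: s.salt.leftPercent),
    SoftenerSensorDescription("battery", lambda s: s.battery),
)

softener = FakeSoftener(salt=FakeSalt(leftPercent=40, rightPercent=55), battery=90)
for description in SENSORS:
    print(description.key, description.value_fn(softener))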
+18 -29
View File
@@ -4,9 +4,8 @@ from __future__ import annotations
from collections.abc import Callable
from dataclasses import dataclass
import logging
from arcam.fmj import IncomingVideoAspectRatio, IncomingVideoColorspace, IntOrTypeEnum
from arcam.fmj import IncomingVideoAspectRatio, IncomingVideoColorspace
from arcam.fmj.state import IncomingAudioConfig, IncomingAudioFormat, State
from homeassistant.components.sensor import (
@@ -22,25 +21,6 @@ from homeassistant.helpers.entity_platform import AddConfigEntryEntitiesCallback
from .coordinator import ArcamFmjConfigEntry
from .entity import ArcamFmjEntity
_LOGGER = logging.getLogger(__name__)
def _enum_options(value: type[IntOrTypeEnum]) -> list[str]:
return [
member.name.lower() for member in value if not member.name.startswith("CODE_")
]
def _enum_value(value: IntOrTypeEnum | None) -> str | None:
if value is None:
return None
if value.name.startswith("CODE_"):
_LOGGER.debug("Undefined enum value %s ignored", value)
return None
return value.name.lower()
@dataclass(frozen=True, kw_only=True)
class ArcamFmjSensorEntityDescription(SensorEntityDescription):
@@ -95,9 +75,9 @@ SENSORS: tuple[ArcamFmjSensorEntityDescription, ...] = (
translation_key="incoming_video_aspect_ratio",
entity_category=EntityCategory.DIAGNOSTIC,
device_class=SensorDeviceClass.ENUM,
options=_enum_options(IncomingVideoAspectRatio),
options=[member.name.lower() for member in IncomingVideoAspectRatio],
value_fn=lambda state: (
_enum_value(vp.aspect_ratio)
vp.aspect_ratio.name.lower()
if (vp := state.get_incoming_video_parameters()) is not None
else None
),
@@ -107,10 +87,11 @@ SENSORS: tuple[ArcamFmjSensorEntityDescription, ...] = (
translation_key="incoming_video_colorspace",
entity_category=EntityCategory.DIAGNOSTIC,
device_class=SensorDeviceClass.ENUM,
options=_enum_options(IncomingVideoColorspace),
options=[member.name.lower() for member in IncomingVideoColorspace],
value_fn=lambda state: (
_enum_value(vp.colorspace)
vp.colorspace.name.lower()
if (vp := state.get_incoming_video_parameters()) is not None
and vp.colorspace is not None
else None
),
),
@@ -119,16 +100,24 @@ SENSORS: tuple[ArcamFmjSensorEntityDescription, ...] = (
translation_key="incoming_audio_format",
entity_category=EntityCategory.DIAGNOSTIC,
device_class=SensorDeviceClass.ENUM,
options=_enum_options(IncomingAudioFormat),
value_fn=lambda state: _enum_value(state.get_incoming_audio_format()[0]),
options=[member.name.lower() for member in IncomingAudioFormat],
value_fn=lambda state: (
result.name.lower()
if (result := state.get_incoming_audio_format()[0]) is not None
else None
),
),
ArcamFmjSensorEntityDescription(
key="incoming_audio_config",
translation_key="incoming_audio_config",
entity_category=EntityCategory.DIAGNOSTIC,
device_class=SensorDeviceClass.ENUM,
options=_enum_options(IncomingAudioConfig),
value_fn=lambda state: _enum_value(state.get_incoming_audio_format()[1]),
options=[member.name.lower() for member in IncomingAudioConfig],
value_fn=lambda state: (
result.name.lower()
if (result := state.get_incoming_audio_format()[1]) is not None
else None
),
),
ArcamFmjSensorEntityDescription(
key="incoming_audio_sample_rate",
@@ -945,10 +945,7 @@ class PipelineRun:
try:
# Transcribe audio stream
stt_vad: VoiceCommandSegmenter | None = None
if (
self.audio_settings.is_vad_enabled
and self.stt_provider.audio_processing.requires_external_vad
):
if self.audio_settings.is_vad_enabled:
stt_vad = VoiceCommandSegmenter(
silence_seconds=self.audio_settings.silence_seconds
)
@@ -13,12 +13,11 @@ from hassil.util import (
)
import voluptuous as vol
from homeassistant.auth.permissions.const import CAT_ENTITIES, POLICY_CONTROL
from homeassistant.components.http import StaticPathConfig
from homeassistant.config_entries import ConfigEntry
from homeassistant.const import ATTR_ENTITY_ID
from homeassistant.core import HomeAssistant, ServiceCall, SupportsResponse
from homeassistant.exceptions import HomeAssistantError, Unauthorized, UnknownUser
from homeassistant.exceptions import HomeAssistantError
from homeassistant.helpers import config_validation as cv
from homeassistant.helpers.entity_component import EntityComponent
from homeassistant.helpers.typing import ConfigType
@@ -104,22 +103,6 @@ async def async_setup(hass: HomeAssistant, config: ConfigType) -> bool:
async def handle_ask_question(call: ServiceCall) -> dict[str, Any]:
"""Handle a Show View service call."""
satellite_entity_id: str = call.data[ATTR_ENTITY_ID]
if call.context.user_id:
user = await hass.auth.async_get_user(call.context.user_id)
if user is None:
raise UnknownUser(
context=call.context,
permission=POLICY_CONTROL,
user_id=call.context.user_id,
)
if not user.permissions.check_entity(satellite_entity_id, POLICY_CONTROL):
raise Unauthorized(
context=call.context,
permission=POLICY_CONTROL,
user_id=call.context.user_id,
perm_category=CAT_ENTITIES,
)
satellite_entity: AssistSatelliteEntity | None = component.get_entity(
satellite_entity_id
)
@@ -7,8 +7,11 @@
required: true
default: any
selector:
automation_behavior:
mode: condition
select:
translation_key: condition_behavior
options:
- all
- any
for:
required: true
default: 00:00:00
@@ -72,6 +72,19 @@
"id": "Answer ID",
"sentences": "Sentences"
}
},
"condition_behavior": {
"options": {
"all": "All",
"any": "Any"
}
},
"trigger_behavior": {
"options": {
"any": "Any",
"first": "First",
"last": "Last"
}
}
},
"services": {
@@ -7,8 +7,12 @@
required: true
default: any
selector:
automation_behavior:
mode: trigger
select:
options:
- first
- last
- any
translation_key: trigger_behavior
for:
required: true
default: 00:00:00
@@ -165,7 +165,6 @@ async def websocket_set_wake_words(
vol.Required("entity_id"): cv.entity_domain(DOMAIN),
}
)
@websocket_api.require_admin
@websocket_api.async_response
async def websocket_test_connection(
hass: HomeAssistant,
+25 -18
View File
@@ -15,6 +15,24 @@ from homeassistant.data_entry_flow import FlowContext
from homeassistant.helpers import config_validation as cv
from homeassistant.util.hass_dict import HassKey
WS_TYPE_SETUP_MFA = "auth/setup_mfa"
SCHEMA_WS_SETUP_MFA = vol.All(
websocket_api.BASE_COMMAND_MESSAGE_SCHEMA.extend(
{
vol.Required("type"): WS_TYPE_SETUP_MFA,
vol.Exclusive("mfa_module_id", "module_or_flow_id"): str,
vol.Exclusive("flow_id", "module_or_flow_id"): str,
vol.Optional("user_input"): object,
}
),
cv.has_at_least_one_key("mfa_module_id", "flow_id"),
)
WS_TYPE_DEPOSE_MFA = "auth/depose_mfa"
SCHEMA_WS_DEPOSE_MFA = websocket_api.BASE_COMMAND_MESSAGE_SCHEMA.extend(
{vol.Required("type"): WS_TYPE_DEPOSE_MFA, vol.Required("mfa_module_id"): str}
)
DATA_SETUP_FLOW_MGR: HassKey[MfaFlowManager] = HassKey("auth_mfa_setup_flow_manager")
_LOGGER = logging.getLogger(__name__)
@@ -55,24 +73,16 @@ def async_setup(hass: HomeAssistant) -> None:
"""Init mfa setup flow manager."""
hass.data[DATA_SETUP_FLOW_MGR] = MfaFlowManager(hass)
websocket_api.async_register_command(hass, websocket_setup_mfa)
websocket_api.async_register_command(hass, websocket_depose_mfa)
websocket_api.async_register_command(
hass, WS_TYPE_SETUP_MFA, websocket_setup_mfa, SCHEMA_WS_SETUP_MFA
)
websocket_api.async_register_command(
hass, WS_TYPE_DEPOSE_MFA, websocket_depose_mfa, SCHEMA_WS_DEPOSE_MFA
)
@callback
@websocket_api.websocket_command(
vol.All(
vol.Schema(
{
vol.Required("type"): "auth/setup_mfa",
vol.Exclusive("mfa_module_id", "module_or_flow_id"): str,
vol.Exclusive("flow_id", "module_or_flow_id"): str,
vol.Optional("user_input"): object,
}
),
cv.has_at_least_one_key("mfa_module_id", "flow_id"),
)
)
@websocket_api.ws_require_user(allow_system_user=False)
def websocket_setup_mfa(
hass: HomeAssistant, connection: websocket_api.ActiveConnection, msg: dict[str, Any]
@@ -111,9 +121,6 @@ def websocket_setup_mfa(
@callback
@websocket_api.websocket_command(
{vol.Required("type"): "auth/depose_mfa", vol.Required("mfa_module_id"): str}
)
@websocket_api.ws_require_user(allow_system_user=False)
def websocket_depose_mfa(
hass: HomeAssistant, connection: websocket_api.ActiveConnection, msg: dict[str, Any]
@@ -4,10 +4,10 @@ from __future__ import annotations
from abc import ABC, abstractmethod
import asyncio
from collections.abc import Callable
from collections.abc import Callable, Mapping
from dataclasses import dataclass
import logging
from typing import Any, cast
from typing import Any, Protocol, cast
from propcache.api import cached_property
import voluptuous as vol
@@ -194,7 +194,6 @@ _EXPERIMENTAL_TRIGGER_PLATFORMS = {
"switch",
"temperature",
"text",
"timer",
"todo",
"update",
"vacuum",
@@ -230,11 +229,14 @@ def is_disabled_experimental_trigger(hass: HomeAssistant, platform: str) -> bool
)
class IfAction(condition_helper.ConditionsChecker):
class IfAction(Protocol):
"""Define the format of if_action."""
config: list[ConfigType]
def __call__(self, variables: Mapping[str, Any] | None = None) -> bool:
"""AND all conditions."""
def is_on(hass: HomeAssistant, entity_id: str) -> bool:
"""Return true if specified automation entity_id is on.
@@ -833,7 +835,7 @@ class AutomationEntity(BaseAutomationEntity, RestoreEntity):
if (
not skip_condition
and self._condition is not None
and not self._condition.async_check(variables=variables)
and not self._condition(variables)
):
self._logger.debug(
"Conditions not met, aborting automation. Condition summary: %s",
@@ -902,13 +904,6 @@ class AutomationEntity(BaseAutomationEntity, RestoreEntity):
"""Remove listeners when removing automation from Home Assistant."""
await super().async_will_remove_from_hass()
await self._async_disable()
if self.registry_entry and self.registry_entry.entity_id != self.entity_id:
# Entity ID change, do not unload the script or conditions as they will
# be reused.
return
self.action_script.async_unload()
if self._condition is not None:
self._condition.async_unload()
async def _async_enable_automation(self, event: Event) -> None:
"""Start automation on startup."""
@@ -1281,7 +1276,6 @@ async def _async_process_if(
@websocket_api.websocket_command({"type": "automation/config", "entity_id": str})
@websocket_api.require_admin
def websocket_config(
hass: HomeAssistant,
connection: websocket_api.ActiveConnection,
+1 -7
View File
@@ -18,10 +18,4 @@ DEFAULT_STREAM_PROFILE = "No stream profile"
DEFAULT_TRIGGER_TIME = 0
DEFAULT_VIDEO_SOURCE = "No video source"
PLATFORMS = [
Platform.BINARY_SENSOR,
Platform.CAMERA,
Platform.EVENT,
Platform.LIGHT,
Platform.SWITCH,
]
PLATFORMS = [Platform.BINARY_SENSOR, Platform.CAMERA, Platform.LIGHT, Platform.SWITCH]
-62
View File
@@ -1,62 +0,0 @@
"""Support for Axis event entities."""
from __future__ import annotations
from dataclasses import dataclass
from axis.models.event import Event, EventTopic
from homeassistant.components.event import (
DoorbellEventType,
EventDeviceClass,
EventEntity,
EventEntityDescription,
)
from homeassistant.core import HomeAssistant, callback
from homeassistant.helpers.entity_platform import AddConfigEntryEntitiesCallback
from . import AxisConfigEntry
from .entity import AxisEventDescription, AxisEventEntity
DOORBELL_CONFIG = ("I8116-E", "0")
@dataclass(frozen=True, kw_only=True)
class AxisEventPlatformDescription(AxisEventDescription, EventEntityDescription):
"""Axis event entity description."""
ENTITY_DESCRIPTIONS = (
AxisEventPlatformDescription(
key="Doorbell",
device_class=EventDeviceClass.DOORBELL,
event_types=[DoorbellEventType.RING],
event_topic=EventTopic.PORT_INPUT,
name_fn=lambda _hub, _event: "Doorbell",
supported_fn=lambda hub, event: (hub.config.model, event.id) == DOORBELL_CONFIG,
),
)
async def async_setup_entry(
hass: HomeAssistant,
config_entry: AxisConfigEntry,
async_add_entities: AddConfigEntryEntitiesCallback,
) -> None:
"""Set up an Axis event platform."""
config_entry.runtime_data.entity_loader.register_platform(
async_add_entities, AxisEvent, ENTITY_DESCRIPTIONS
)
class AxisEvent(AxisEventEntity, EventEntity):
"""Representation of an Axis event entity."""
entity_description: AxisEventPlatformDescription
@callback
def async_event_callback(self, event: Event) -> None:
"""Handle Axis event updates."""
if event.is_tripped:
self._trigger_event(DoorbellEventType.RING)
self.async_write_ha_state()
-1
View File
@@ -36,7 +36,6 @@ async def get_axis_api(
username=config[CONF_USERNAME],
password=config[CONF_PASSWORD],
web_proto=config.get(CONF_PROTOCOL, "http"),
websocket_enabled=True,
)
)
+1 -1
View File
@@ -29,7 +29,7 @@
"integration_type": "device",
"iot_class": "local_push",
"loggers": ["axis"],
"requirements": ["axis==69"],
"requirements": ["axis==68"],
"ssdp": [
{
"manufacturer": "AXIS"
+3 -6
View File
@@ -2,7 +2,6 @@
from homeassistant.core import HomeAssistant, ServiceCall
from homeassistant.helpers.hassio import is_hassio
from homeassistant.helpers.service import async_register_admin_service
from .const import DATA_MANAGER, DOMAIN
@@ -31,9 +30,7 @@ async def _async_handle_create_automatic_service(call: ServiceCall) -> None:
def async_setup_services(hass: HomeAssistant) -> None:
"""Register services."""
if not is_hassio(hass):
async_register_admin_service(
hass, DOMAIN, "create", _async_handle_create_service
)
async_register_admin_service(
hass, DOMAIN, "create_automatic", _async_handle_create_automatic_service
hass.services.async_register(DOMAIN, "create", _async_handle_create_service)
hass.services.async_register(
DOMAIN, "create_automatic", _async_handle_create_automatic_service
)
@@ -21,9 +21,8 @@ from homeassistant.helpers import config_validation as cv, device_registry as dr
from homeassistant.helpers.typing import ConfigType
from homeassistant.util.ssl import get_default_context
from .const import DOMAIN, MANUFACTURER, BeoModel
from .const import DOMAIN
from .services import async_setup_services
from .util import get_remotes
from .websocket import BeoWebsocket
@@ -59,6 +58,15 @@ async def async_setup_entry(hass: HomeAssistant, entry: BeoConfigEntry) -> bool:
# Remove casts to str
assert entry.unique_id
# Create device now as BeoWebsocket needs a device for debug logging, firing events etc.
device_registry = dr.async_get(hass)
device_registry.async_get_or_create(
config_entry_id=entry.entry_id,
identifiers={(DOMAIN, entry.unique_id)},
name=entry.title,
model=entry.data[CONF_MODEL],
)
client = MozartClient(host=entry.data[CONF_HOST], ssl_context=get_default_context())
# Check API and WebSocket connection
@@ -75,27 +83,6 @@ async def async_setup_entry(hass: HomeAssistant, entry: BeoConfigEntry) -> bool:
await client.close_api_client()
raise ConfigEntryNotReady(f"Unable to connect to {entry.title}") from error
# Create device now as BeoWebsocket needs a device for debug logging, firing events etc.
device_registry = dr.async_get(hass)
device_registry.async_get_or_create(
config_entry_id=entry.entry_id,
identifiers={(DOMAIN, entry.unique_id)},
model=entry.data[CONF_MODEL],
)
# Create devices for paired Beoremote One remotes
for remote in await get_remotes(client):
device_registry.async_get_or_create(
config_entry_id=entry.entry_id,
identifiers={(DOMAIN, f"{remote.serial_number}_{entry.unique_id}")},
name=f"{BeoModel.BEOREMOTE_ONE}-{remote.serial_number}-{entry.unique_id}",
model=BeoModel.BEOREMOTE_ONE,
serial_number=remote.serial_number,
sw_version=remote.app_version,
manufacturer=MANUFACTURER,
via_device=(DOMAIN, entry.unique_id),
)
websocket = BeoWebsocket(hass, entry, client)
# Add the websocket and API client
@@ -52,7 +52,6 @@ class BeoConfigFlowHandler(ConfigFlow, domain=DOMAIN):
_beolink_jid = ""
_client: MozartClient
_friendly_name = ""
_host = ""
_model = ""
_name = ""
@@ -112,7 +111,6 @@ class BeoConfigFlowHandler(ConfigFlow, domain=DOMAIN):
)
self._beolink_jid = beolink_self.jid
self._friendly_name = beolink_self.friendly_name
self._serial_number = get_serial_number_from_jid(beolink_self.jid)
await self.async_set_unique_id(self._serial_number)
@@ -151,7 +149,6 @@ class BeoConfigFlowHandler(ConfigFlow, domain=DOMAIN):
return self.async_abort(reason="invalid_address")
self._model = discovery_info.hostname[:-16].replace("-", " ")
self._friendly_name = discovery_info.properties[ATTR_FRIENDLY_NAME]
self._serial_number = discovery_info.properties[ATTR_SERIAL_NUMBER]
self._beolink_jid = f"{discovery_info.properties[ATTR_TYPE_NUMBER]}.{discovery_info.properties[ATTR_ITEM_NUMBER]}.{self._serial_number}@products.bang-olufsen.com"
@@ -167,13 +164,16 @@ class BeoConfigFlowHandler(ConfigFlow, domain=DOMAIN):
async def _create_entry(self) -> ConfigFlowResult:
"""Create the config entry for a discovered or manually configured Bang & Olufsen device."""
# Ensure that created entities have a unique and easily identifiable id and not a "friendly name"
self._name = f"{self._model}-{self._serial_number}"
return self.async_create_entry(
title=self._friendly_name,
title=self._name,
data=EntryData(
host=self._host,
jid=self._beolink_jid,
model=self._model,
name=self._friendly_name,
name=self._name,
),
)
@@ -20,6 +20,7 @@ from .const import (
CONNECTION_STATUS,
DEVICE_BUTTON_EVENTS,
DOMAIN,
MANUFACTURER,
BeoModel,
WebsocketNotification,
)
@@ -141,6 +142,12 @@ class BeoRemoteKeyEvent(BeoEvent):
self._attr_unique_id = f"{remote.serial_number}_{self._unique_id}_{key_type}"
self._attr_device_info = DeviceInfo(
identifiers={(DOMAIN, f"{remote.serial_number}_{self._unique_id}")},
name=f"{BeoModel.BEOREMOTE_ONE}-{remote.serial_number}-{self._unique_id}",
model=BeoModel.BEOREMOTE_ONE,
serial_number=remote.serial_number,
sw_version=remote.app_version,
manufacturer=MANUFACTURER,
via_device=(DOMAIN, self._unique_id),
)
# Make the native key name Home Assistant compatible
@@ -115,7 +115,7 @@ class BeoSensorRemoteBatteryLevel(BeoSensor):
f"{remote.serial_number}_{self._unique_id}_remote_battery_level"
)
self._attr_device_info = DeviceInfo(
identifiers={(DOMAIN, f"{remote.serial_number}_{self._unique_id}")},
identifiers={(DOMAIN, f"{remote.serial_number}_{self._unique_id}")}
)
self._attr_native_value = remote.battery_level
self._remote = remote
+5 -19
View File
@@ -30,33 +30,19 @@ BATTERY_PERCENTAGE_DOMAIN_SPECS = {
CONDITIONS: dict[str, type[Condition]] = {
"is_low": make_entity_state_condition(
BATTERY_DOMAIN_SPECS,
STATE_ON,
support_duration=True,
primary_entities_only=False,
BATTERY_DOMAIN_SPECS, STATE_ON, support_duration=True
),
"is_not_low": make_entity_state_condition(
BATTERY_DOMAIN_SPECS,
STATE_OFF,
support_duration=True,
primary_entities_only=False,
BATTERY_DOMAIN_SPECS, STATE_OFF, support_duration=True
),
"is_charging": make_entity_state_condition(
BATTERY_CHARGING_DOMAIN_SPECS,
STATE_ON,
support_duration=True,
primary_entities_only=False,
BATTERY_CHARGING_DOMAIN_SPECS, STATE_ON, support_duration=True
),
"is_not_charging": make_entity_state_condition(
BATTERY_CHARGING_DOMAIN_SPECS,
STATE_OFF,
support_duration=True,
primary_entities_only=False,
BATTERY_CHARGING_DOMAIN_SPECS, STATE_OFF, support_duration=True
),
"is_level": make_entity_numerical_condition(
BATTERY_PERCENTAGE_DOMAIN_SPECS,
PERCENTAGE,
primary_entities_only=False,
BATTERY_PERCENTAGE_DOMAIN_SPECS, PERCENTAGE
),
}
@@ -3,14 +3,16 @@
entity:
- domain: binary_sensor
device_class: battery
primary_entities_only: false
fields:
behavior: &condition_behavior
required: true
default: any
selector:
automation_behavior:
mode: condition
select:
translation_key: condition_behavior
options:
- all
- any
for: &condition_for
required: true
default: 00:00:00
@@ -40,7 +42,6 @@ is_charging:
entity:
- domain: binary_sensor
device_class: battery_charging
primary_entities_only: false
fields:
behavior: *condition_behavior
for: *condition_for
@@ -50,7 +51,6 @@ is_not_charging:
entity:
- domain: binary_sensor
device_class: battery_charging
primary_entities_only: false
fields:
behavior: *condition_behavior
for: *condition_for
@@ -60,7 +60,6 @@ is_level:
entity:
- domain: sensor
device_class: battery
primary_entities_only: false
fields:
behavior: *condition_behavior
threshold:
@@ -69,6 +69,21 @@
"name": "Battery is not low"
}
},
"selector": {
"condition_behavior": {
"options": {
"all": "All",
"any": "Any"
}
},
"trigger_behavior": {
"options": {
"any": "Any",
"first": "First",
"last": "Last"
}
}
},
"title": "Battery",
"triggers": {
"level_changed": {
+6 -14
View File
@@ -32,27 +32,19 @@ BATTERY_PERCENTAGE_DOMAIN_SPECS: dict[str, DomainSpec] = {
}
TRIGGERS: dict[str, type[Trigger]] = {
"low": make_entity_target_state_trigger(
BATTERY_LOW_DOMAIN_SPECS, STATE_ON, primary_entities_only=False
),
"not_low": make_entity_target_state_trigger(
BATTERY_LOW_DOMAIN_SPECS, STATE_OFF, primary_entities_only=False
),
"low": make_entity_target_state_trigger(BATTERY_LOW_DOMAIN_SPECS, STATE_ON),
"not_low": make_entity_target_state_trigger(BATTERY_LOW_DOMAIN_SPECS, STATE_OFF),
"started_charging": make_entity_target_state_trigger(
BATTERY_CHARGING_DOMAIN_SPECS, STATE_ON, primary_entities_only=False
BATTERY_CHARGING_DOMAIN_SPECS, STATE_ON
),
"stopped_charging": make_entity_target_state_trigger(
BATTERY_CHARGING_DOMAIN_SPECS, STATE_OFF, primary_entities_only=False
BATTERY_CHARGING_DOMAIN_SPECS, STATE_OFF
),
"level_changed": make_entity_numerical_state_changed_trigger(
BATTERY_PERCENTAGE_DOMAIN_SPECS,
valid_unit="%",
primary_entities_only=False,
BATTERY_PERCENTAGE_DOMAIN_SPECS, valid_unit="%"
),
"level_crossed_threshold": make_entity_numerical_state_crossed_threshold_trigger(
BATTERY_PERCENTAGE_DOMAIN_SPECS,
valid_unit="%",
primary_entities_only=False,
BATTERY_PERCENTAGE_DOMAIN_SPECS, valid_unit="%"
),
}
@@ -3,8 +3,12 @@
required: true
default: any
selector:
automation_behavior:
mode: trigger
select:
translation_key: trigger_behavior
options:
- first
- last
- any
for: &trigger_for
required: true
default: 00:00:00
@@ -29,19 +33,16 @@
entity:
- domain: binary_sensor
device_class: battery
primary_entities_only: false
.trigger_target_charging: &trigger_target_charging
entity:
- domain: binary_sensor
device_class: battery_charging
primary_entities_only: false
.trigger_target_percentage: &trigger_target_percentage
entity:
- domain: sensor
device_class: battery
primary_entities_only: false
low:
fields:
@@ -33,13 +33,11 @@ from homeassistant.components.update import DOMAIN as UPDATE_DOMAIN
from homeassistant.components.weather import DOMAIN as WEATHER_DOMAIN
from homeassistant.components.zone import DOMAIN as ZONE_DOMAIN
from homeassistant.config_entries import (
SOURCE_USER,
ConfigEntry,
ConfigFlowResult,
ConfigSubentry,
ConfigSubentryData,
ConfigSubentryFlow,
FlowType,
SubentryFlowContext,
SubentryFlowResult,
)
from homeassistant.const import (
@@ -64,6 +62,7 @@ from homeassistant.helpers.schema_config_entry_flow import (
from .binary_sensor import above_greater_than_below, no_overlapping
from .const import (
CONF_OBSERVATIONS,
CONF_P_GIVEN_F,
CONF_P_GIVEN_T,
CONF_PRIOR,
@@ -374,6 +373,26 @@ def _validate_observation_subentry(
return user_input
async def _validate_subentry_from_config_entry(
handler: SchemaCommonFlowHandler, user_input: dict[str, Any]
) -> dict[str, Any]:
# Standard behavior is to merge the result with the options.
# In this case, we want to add a subentry so we update the options directly.
observations: list[dict[str, Any]] = handler.options.setdefault(
CONF_OBSERVATIONS, []
)
if handler.parent_handler.cur_step is not None:
user_input[CONF_PLATFORM] = handler.parent_handler.cur_step["step_id"]
user_input = _validate_observation_subentry(
user_input[CONF_PLATFORM],
user_input,
other_subentries=handler.options[CONF_OBSERVATIONS],
)
observations.append(user_input)
return {}
async def _get_description_placeholders(
handler: SchemaCommonFlowHandler,
) -> dict[str, str]:
@@ -401,12 +420,48 @@ async def _get_description_placeholders(
}
async def _get_observation_menu_options(handler: SchemaCommonFlowHandler) -> list[str]:
"""Return the menu options for the observation selector."""
options = [typ.value for typ in ObservationTypes]
if handler.options.get(CONF_OBSERVATIONS):
options.append("finish")
return options
CONFIG_FLOW: dict[str, SchemaFlowMenuStep | SchemaFlowFormStep] = {
str(USER): SchemaFlowFormStep(
CONFIG_SCHEMA,
validate_user_input=_validate_user,
next_step=str(OBSERVATION_SELECTOR),
description_placeholders=_get_description_placeholders,
)
),
str(OBSERVATION_SELECTOR): SchemaFlowMenuStep(
_get_observation_menu_options,
),
str(ObservationTypes.STATE): SchemaFlowFormStep(
STATE_SUBSCHEMA,
next_step=str(OBSERVATION_SELECTOR),
validate_user_input=_validate_subentry_from_config_entry,
# Prevent the name of the Bayesian sensor from being used as the suggested
# name of the observations
suggested_values=None,
description_placeholders=_get_description_placeholders,
),
str(ObservationTypes.NUMERIC_STATE): SchemaFlowFormStep(
NUMERIC_STATE_SUBSCHEMA,
next_step=str(OBSERVATION_SELECTOR),
validate_user_input=_validate_subentry_from_config_entry,
suggested_values=None,
description_placeholders=_get_description_placeholders,
),
str(ObservationTypes.TEMPLATE): SchemaFlowFormStep(
TEMPLATE_SUBSCHEMA,
next_step=str(OBSERVATION_SELECTOR),
validate_user_input=_validate_subentry_from_config_entry,
suggested_values=None,
description_placeholders=_get_description_placeholders,
),
"finish": SchemaFlowFormStep(),
}
@@ -442,17 +497,27 @@ class BayesianConfigFlowHandler(SchemaConfigFlowHandler, domain=DOMAIN):
name: str = options[CONF_NAME]
return name
async def async_on_create_entry(self, result: ConfigFlowResult) -> ConfigFlowResult:
"""Start subentry flow when config entry has been created."""
subentry_result = await self.hass.config_entries.subentries.async_init(
(result["result"].entry_id, "observation"),
context=SubentryFlowContext(source=SOURCE_USER),
)
result["next_flow"] = (
FlowType.CONFIG_SUBENTRIES_FLOW,
subentry_result["flow_id"],
)
return result
@callback
def async_create_entry(
self,
data: Mapping[str, Any],
**kwargs: Any,
) -> ConfigFlowResult:
"""Finish config flow and create a config entry."""
data = dict(data)
observations = data.pop(CONF_OBSERVATIONS)
subentries: list[ConfigSubentryData] = [
ConfigSubentryData(
data=observation,
title=observation[CONF_NAME],
subentry_type="observation",
unique_id=None,
)
for observation in observations
]
self.async_config_flow_finished(data)
return super().async_create_entry(data=data, subentries=subentries, **kwargs)
class ObservationSubentryFlowHandler(ConfigSubentryFlow):
@@ -21,6 +21,9 @@
"save_video": {
"service": "mdi:file-video"
},
"send_pin": {
"service": "mdi:two-factor-authentication"
},
"trigger_camera": {
"service": "mdi:image-refresh"
}
+49 -3
@@ -5,9 +5,15 @@ from __future__ import annotations
import voluptuous as vol
from homeassistant.components.camera import DOMAIN as CAMERA_DOMAIN
from homeassistant.const import CONF_FILE_PATH, CONF_FILENAME
from homeassistant.core import HomeAssistant, callback
from homeassistant.helpers import config_validation as cv, service
from homeassistant.const import (
ATTR_CONFIG_ENTRY_ID,
CONF_FILE_PATH,
CONF_FILENAME,
CONF_PIN,
)
from homeassistant.core import HomeAssistant, ServiceCall, callback
from homeassistant.exceptions import HomeAssistantError
from homeassistant.helpers import config_validation as cv, issue_registry as ir, service
from .const import DOMAIN
@@ -17,10 +23,50 @@ SERVICE_SAVE_VIDEO = "save_video"
SERVICE_SAVE_RECENT_CLIPS = "save_recent_clips"
# Deprecated
SERVICE_SEND_PIN = "send_pin"
SERVICE_SEND_PIN_SCHEMA = vol.Schema(
{
vol.Required(ATTR_CONFIG_ENTRY_ID): vol.All(cv.ensure_list, [cv.string]),
vol.Optional(CONF_PIN): cv.string,
}
)
async def _send_pin(call: ServiceCall) -> None:
"""Call blink to send new pin."""
# Create repair issue to inform user about service removal
ir.async_create_issue(
call.hass,
DOMAIN,
"service_send_pin_deprecation",
is_fixable=False,
issue_domain=DOMAIN,
severity=ir.IssueSeverity.ERROR,
breaks_in_ha_version="2026.5.0",
translation_key="service_send_pin_deprecation",
translation_placeholders={"service_name": f"{DOMAIN}.{SERVICE_SEND_PIN}"},
)
# Service has been removed - raise exception
raise HomeAssistantError(
translation_domain=DOMAIN,
translation_key="service_removed",
translation_placeholders={"service_name": f"{DOMAIN}.{SERVICE_SEND_PIN}"},
)
@callback
def async_setup_services(hass: HomeAssistant) -> None:
"""Set up the services for the Blink integration."""
hass.services.async_register(
DOMAIN,
SERVICE_SEND_PIN,
_send_pin,
schema=SERVICE_SEND_PIN_SCHEMA,
)
service.async_register_platform_entity_service(
hass,
DOMAIN,
@@ -35,3 +35,15 @@ save_recent_clips:
example: "/tmp"
selector:
text:
send_pin:
fields:
config_entry_id:
required: true
selector:
config_entry:
integration: blink
pin:
example: "abc123"
selector:
text:
@@ -82,6 +82,9 @@
},
"not_loaded": {
"message": "{target} is not loaded."
},
"service_removed": {
"message": "The service {service_name} has been removed and is no longer needed. Home Assistant will automatically prompt for reauthentication when required."
}
},
"issues": {
@@ -95,6 +98,10 @@
}
},
"title": "Blink update service is being removed"
},
"service_send_pin_deprecation": {
"description": "The service {service_name} has been removed and is no longer needed. When a new two-factor authentication code is required, Home Assistant will automatically prompt you to reauthenticate through the integration configuration. Please remove any automations or scripts that call this service.",
"title": "Blink send PIN service has been removed"
}
},
"options": {
@@ -133,6 +140,20 @@
},
"name": "Save video"
},
"send_pin": {
"description": "Sends a new PIN to Blink for 2FA.",
"fields": {
"config_entry_id": {
"description": "The Blink integration ID.",
"name": "Integration ID"
},
"pin": {
"description": "PIN received from Blink. Leave empty if you only received a verification email.",
"name": "PIN"
}
},
"name": "Send PIN"
},
"trigger_camera": {
"description": "Requests camera to take new image.",
"name": "Trigger camera"
@@ -58,7 +58,6 @@ from .api import (
async_address_present,
async_ble_device_from_address,
async_clear_address_from_match_history,
async_clear_advertisement_history,
async_current_scanners,
async_discovered_service_info,
async_get_advertisement_callback,
@@ -117,7 +116,6 @@ __all__ = [
"async_address_present",
"async_ble_device_from_address",
"async_clear_address_from_match_history",
"async_clear_advertisement_history",
"async_current_scanners",
"async_discovered_service_info",
"async_get_advertisement_callback",
-13
@@ -207,19 +207,6 @@ def async_clear_address_from_match_history(hass: HomeAssistant, address: str) ->
_get_manager(hass).async_clear_address_from_match_history(address)
@hass_callback
def async_clear_advertisement_history(hass: HomeAssistant, address: str) -> None:
"""Clear cached advertisement history for a device.
Causes the next advertisement from this address to be treated as new
data, bypassing the change-detection guard in the Bluetooth manager.
Intended for devices that emit static advertisements as a wake-up
signal, for example, devices that require an active GATT connection
to read sensor data and whose advertisement payload never changes.
"""
_get_manager(hass).async_clear_advertisement_history(address)
@hass_callback
def async_register_scanner(
hass: HomeAssistant,
@@ -1,5 +1,4 @@
"""The Broadlink integration."""
# pylint: disable=hass-use-runtime-data # Uses legacy hass.data[DOMAIN] pattern
from __future__ import annotations
@@ -34,8 +34,6 @@ async def async_setup_entry(
async_add_entities: AddConfigEntryEntitiesCallback,
) -> None:
"""Set up the Broadlink climate entities."""
# Uses legacy hass.data[DOMAIN] pattern
# pylint: disable-next=hass-use-runtime-data
device = hass.data[DOMAIN].devices[config_entry.entry_id]
if device.api.type in DOMAINS_AND_TYPES[Platform.CLIMATE]:
@@ -7,7 +7,6 @@ DOMAIN = "broadlink"
DOMAINS_AND_TYPES = {
Platform.CLIMATE: {"HYS"},
Platform.LIGHT: {"LB1", "LB2"},
Platform.RADIO_FREQUENCY: {"RM4PRO", "RMPRO"},
Platform.REMOTE: {"RM4MINI", "RM4PRO", "RMMINI", "RMMINIB", "RMPRO"},
Platform.SELECT: {"HYS"},
Platform.SENSOR: {
@@ -133,8 +133,6 @@ class BroadlinkDevice[_ApiT: blk.Device = blk.Device]:
await coordinator.async_config_entry_first_refresh()
self.update_manager = update_manager
# Uses legacy hass.data[DOMAIN] pattern
# pylint: disable-next=hass-use-runtime-data
self.hass.data[DOMAIN].devices[config.entry_id] = self
self.reset_jobs.append(config.add_update_listener(self.async_update))
@@ -32,8 +32,6 @@ async def async_setup_entry(
async_add_entities: AddConfigEntryEntitiesCallback,
) -> None:
"""Set up the Broadlink light."""
# Uses legacy hass.data[DOMAIN] pattern
# pylint: disable-next=hass-use-runtime-data
device = hass.data[DOMAIN].devices[config_entry.entry_id]
lights = []
@@ -1,132 +0,0 @@
"""Radio Frequency platform for Broadlink."""
from __future__ import annotations
import logging
from broadlink.exceptions import BroadlinkException
from rf_protocols import RadioFrequencyCommand
from homeassistant.components.radio_frequency import RadioFrequencyTransmitterEntity
from homeassistant.config_entries import ConfigEntry
from homeassistant.core import HomeAssistant
from homeassistant.exceptions import HomeAssistantError
from homeassistant.helpers.entity_platform import AddConfigEntryEntitiesCallback
from .const import DOMAIN
from .device import BroadlinkDevice
from .entity import BroadlinkEntity
_LOGGER = logging.getLogger(__name__)
PARALLEL_UPDATES = 0
_TICK_US = 32.84
_RF_433_TYPE_BYTE = 0xB2
_RF_315_TYPE_BYTE = 0xB4
_RF_433_RANGE = (433_050_000, 434_790_000)
_RF_315_RANGE = (314_950_000, 315_250_000)
SUPPORTED_FREQUENCY_RANGES: list[tuple[int, int]] = [_RF_433_RANGE, _RF_315_RANGE]
def _type_byte_for_frequency(frequency: int) -> int:
"""Return the Broadlink RF type byte for a given carrier frequency."""
if _RF_433_RANGE[0] <= frequency <= _RF_433_RANGE[1]:
return _RF_433_TYPE_BYTE
if _RF_315_RANGE[0] <= frequency <= _RF_315_RANGE[1]:
return _RF_315_TYPE_BYTE
raise HomeAssistantError(
translation_domain=DOMAIN,
translation_key="frequency_not_supported",
translation_placeholders={"frequency": f"{frequency / 1_000_000:g}"},
)
def encode_rf_packet(
*,
type_byte: int,
repeat_count: int,
timings_us: list[int],
) -> bytes:
"""Encode raw OOK timings as a Broadlink RF pulse-length packet.
The layout is::
byte 0 type byte (0xB2 for 433 MHz, 0xB4 for 315 MHz)
byte 1 repeat count (additional transmissions after the first)
bytes 2..3 payload length (little-endian), counted from byte 4
bytes 4..N-1 pulses: 1 byte when ticks < 256, otherwise
0x00 followed by a 2-byte big-endian tick count
Each pulse is expressed as multiples of 32.84 µs ticks, which is the
timing resolution of the Broadlink RF front-end.
"""
buf = bytearray([type_byte, repeat_count, 0, 0])
for duration in timings_us:
ticks = round(abs(duration) / _TICK_US)
div, mod = divmod(ticks, 256)
if div:
buf.append(0x00)
buf.append(div)
buf.append(mod)
payload_len = len(buf) - 4
buf[2] = payload_len & 0xFF
buf[3] = (payload_len >> 8) & 0xFF
return bytes(buf)
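A minimal worked example of the layout described in the docstring above, using two hypothetical pulse timings (320 µs and 9700 µs) on the 433 MHz band with one extra repetition; the byte values follow directly from the 32.84 µs tick resolution:
example = encode_rf_packet(
    type_byte=0xB2,        # 433 MHz marker byte
    repeat_count=1,        # one additional transmission after the first
    timings_us=[320, 9700],
)
# 320 µs  -> round(320 / 32.84)  = 10 ticks  -> single byte 0x0A
# 9700 µs -> round(9700 / 32.84) = 295 ticks -> 0x00 escape + 0x01 0x27 (big-endian)
# header  -> 0xB2, 0x01, payload length 4 encoded little-endian as 0x04 0x00
assert example == bytes([0xB2, 0x01, 0x04, 0x00, 0x0A, 0x00, 0x01, 0x27])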
async def async_setup_entry(
hass: HomeAssistant,
config_entry: ConfigEntry,
async_add_entities: AddConfigEntryEntitiesCallback,
) -> None:
"""Set up a Broadlink radio frequency transmitter."""
# Uses legacy hass.data[DOMAIN] pattern
# pylint: disable-next=hass-use-runtime-data
device: BroadlinkDevice = hass.data[DOMAIN].devices[config_entry.entry_id]
async_add_entities([BroadlinkRadioFrequency(device)])
class BroadlinkRadioFrequency(BroadlinkEntity, RadioFrequencyTransmitterEntity):
"""Representation of a Broadlink RF transmitter."""
_attr_has_entity_name = True
_attr_name = None
def __init__(self, device: BroadlinkDevice) -> None:
"""Initialize the entity."""
super().__init__(device)
self._attr_unique_id = device.unique_id
@property
def supported_frequency_ranges(self) -> list[tuple[int, int]]:
"""Return the Broadlink-supported narrow RF bands."""
return SUPPORTED_FREQUENCY_RANGES
async def async_send_command(self, command: RadioFrequencyCommand) -> None:
"""Encode an OOK command and transmit it via the Broadlink device."""
type_byte = _type_byte_for_frequency(command.frequency)
packet = encode_rf_packet(
type_byte=type_byte,
repeat_count=command.repeat_count,
timings_us=command.get_raw_timings(),
)
_LOGGER.debug(
"Transmitting RF packet: %d bytes on %d Hz (repeat=%d)",
len(packet),
command.frequency,
command.repeat_count,
)
device = self._device
try:
await device.async_request(device.api.send_data, packet)
except (BroadlinkException, OSError) as err:
raise HomeAssistantError(
translation_domain=DOMAIN,
translation_key="transmit_failed",
translation_placeholders={"error": str(err)},
) from err
@@ -95,8 +95,6 @@ async def async_setup_entry(
async_add_entities: AddConfigEntryEntitiesCallback,
) -> None:
"""Set up a Broadlink remote."""
# Uses legacy hass.data[DOMAIN] pattern
# pylint: disable-next=hass-use-runtime-data
device = hass.data[DOMAIN].devices[config_entry.entry_id]
remote = BroadlinkRemote(
device,
@@ -31,8 +31,6 @@ async def async_setup_entry(
async_add_entities: AddConfigEntryEntitiesCallback,
) -> None:
"""Set up the Broadlink select."""
# Uses legacy hass.data[DOMAIN] pattern
# pylint: disable-next=hass-use-runtime-data
device = hass.data[DOMAIN].devices[config_entry.entry_id]
async_add_entities([BroadlinkDayOfWeek(device)])
@@ -108,8 +108,6 @@ async def async_setup_entry(
async_add_entities: AddConfigEntryEntitiesCallback,
) -> None:
"""Set up the Broadlink sensor."""
# Uses legacy hass.data[DOMAIN] pattern
# pylint: disable-next=hass-use-runtime-data
device = hass.data[DOMAIN].devices[config_entry.entry_id]
sensor_data = device.update_manager.coordinator.data
sensors = [
@@ -77,13 +77,5 @@
"name": "Total consumption"
}
}
},
"exceptions": {
"frequency_not_supported": {
"message": "Broadlink devices cannot transmit on {frequency} MHz"
},
"transmit_failed": {
"message": "Failed to transmit RF command: {error}"
}
}
}
@@ -1,5 +1,4 @@
"""Support for Broadlink switches."""
# pylint: disable=hass-use-runtime-data # Uses legacy hass.data[DOMAIN] pattern
from __future__ import annotations
@@ -22,8 +22,6 @@ async def async_setup_entry(
async_add_entities: AddConfigEntryEntitiesCallback,
) -> None:
"""Set up the Broadlink time."""
# Uses legacy hass.data[DOMAIN] pattern
# pylint: disable-next=hass-use-runtime-data
device = hass.data[DOMAIN].devices[config_entry.entry_id]
async_add_entities([BroadlinkTime(device)])
+2 -1
@@ -293,8 +293,9 @@ SENSOR_TYPES: tuple[BrotherSensorEntityDescription, ...] = (
),
BrotherSensorEntityDescription(
key="uptime",
translation_key="last_restart",
entity_registry_enabled_default=False,
device_class=SensorDeviceClass.UPTIME,
device_class=SensorDeviceClass.TIMESTAMP,
entity_category=EntityCategory.DIAGNOSTIC,
value=lambda data: data.uptime,
),
@@ -151,6 +151,9 @@
"laser_remaining_life": {
"name": "Laser remaining lifetime"
},
"last_restart": {
"name": "Last restart"
},
"magenta_drum_page_counter": {
"name": "Magenta drum page counter",
"unit_of_measurement": "[%key:component::brother::entity::sensor::page_counter::unit_of_measurement%]"
+15 -131
@@ -13,7 +13,6 @@ from bsblan import (
Info,
StaticState,
)
from yarl import URL
from homeassistant.config_entries import ConfigEntry
from homeassistant.const import (
@@ -29,16 +28,11 @@ from homeassistant.exceptions import (
ConfigEntryError,
ConfigEntryNotReady,
)
from homeassistant.helpers import config_validation as cv, device_registry as dr
from homeassistant.helpers import config_validation as cv
from homeassistant.helpers.aiohttp_client import async_get_clientsession
from homeassistant.helpers.device_registry import (
CONNECTION_NETWORK_MAC,
DeviceInfo,
format_mac,
)
from homeassistant.helpers.typing import ConfigType
from .const import CONF_HEATING_CIRCUITS, CONF_PASSKEY, DEFAULT_PORT, DOMAIN, LOGGER
from .const import CONF_PASSKEY, DOMAIN, LOGGER
from .coordinator import BSBLanFastCoordinator, BSBLanSlowCoordinator
from .services import async_setup_services
@@ -58,35 +52,7 @@ class BSBLanData:
client: BSBLAN
device: Device
info: Info
static: dict[int, StaticState | None]
available_circuits: list[int]
def get_bsblan_device_info(
device: Device, info: Info, host: str, port: int
) -> DeviceInfo:
"""Build DeviceInfo for the main BSB-LAN controller device."""
return DeviceInfo(
identifiers={(DOMAIN, device.MAC)},
connections={(CONNECTION_NETWORK_MAC, format_mac(device.MAC))},
name=device.name,
manufacturer="BSBLAN Inc.",
model=(
info.device_identification.value
if info.device_identification and info.device_identification.value
else None
),
model_id=(
f"{info.controller_family.value}_{info.controller_variant.value}"
if info.controller_family
and info.controller_variant
and info.controller_family.value
and info.controller_variant.value
else None
),
sw_version=device.version,
configuration_url=str(URL.build(scheme="http", host=host, port=port)),
)
static: StaticState | None
async def async_setup(hass: HomeAssistant, config: ConfigType) -> bool:
@@ -109,17 +75,13 @@ async def async_setup_entry(hass: HomeAssistant, entry: BSBLanConfigEntry) -> bo
# create BSBLAN client
session = async_get_clientsession(hass)
bsblan = BSBLAN(config=config, session=session)
bsblan = BSBLAN(config, session)
try:
# Initialize the client first - this sets up internal caches and validates
# the connection by fetching firmware version
await bsblan.initialize()
# Read available heating circuits from config entry data
# (populated by config flow or migration)
circuits: list[int] = entry.data[CONF_HEATING_CIRCUITS]
# Fetch required device metadata in parallel for faster startup
device, info = await asyncio.gather(
bsblan.device(),
@@ -148,25 +110,18 @@ async def async_setup_entry(hass: HomeAssistant, entry: BSBLanConfigEntry) -> bo
translation_key="setup_general_error",
) from err
# Fetch static values per configured circuit.
# BSB-LAN is a serial bus — it processes one parameter at a time,
# so concurrent requests offer no speed benefit over sequential.
# Static values are optional — some devices may not support them.
static_per_circuit: dict[int, StaticState | None] = {}
for circuit in circuits:
try:
static_per_circuit[circuit] = await bsblan.static_values(circuit=circuit)
except (BSBLANError, TimeoutError) as err:
LOGGER.debug(
"Static values not available for %s circuit %d: %s",
entry.data[CONF_HOST],
circuit,
err,
)
static_per_circuit[circuit] = None
try:
static = await bsblan.static_values()
except (BSBLANError, TimeoutError) as err:
LOGGER.debug(
"Static values not available for %s: %s",
entry.data[CONF_HOST],
err,
)
static = None
# Create coordinators with the already-initialized client
fast_coordinator = BSBLanFastCoordinator(hass, entry, bsblan, circuits)
fast_coordinator = BSBLanFastCoordinator(hass, entry, bsblan)
slow_coordinator = BSBLanSlowCoordinator(hass, entry, bsblan)
# Perform first refresh of fast coordinator (required for entities)
@@ -182,25 +137,7 @@ async def async_setup_entry(hass: HomeAssistant, entry: BSBLanConfigEntry) -> bo
slow_coordinator=slow_coordinator,
device=device,
info=info,
static=static_per_circuit,
available_circuits=circuits,
)
# Register main device before forwarding platforms, so sub-devices
# (heating circuits, water heater) can reference it via via_device
device_registry = dr.async_get(hass)
port = entry.data.get(CONF_PORT, DEFAULT_PORT)
main_device_info = get_bsblan_device_info(device, info, entry.data[CONF_HOST], port)
device_registry.async_get_or_create(
config_entry_id=entry.entry_id,
identifiers=main_device_info["identifiers"],
connections=main_device_info["connections"],
name=main_device_info["name"],
manufacturer=main_device_info["manufacturer"],
model=main_device_info.get("model"),
model_id=main_device_info.get("model_id"),
sw_version=main_device_info.get("sw_version"),
configuration_url=main_device_info.get("configuration_url"),
static=static,
)
await hass.config_entries.async_forward_entry_setups(entry, PLATFORMS)
@@ -211,56 +148,3 @@ async def async_setup_entry(hass: HomeAssistant, entry: BSBLanConfigEntry) -> bo
async def async_unload_entry(hass: HomeAssistant, entry: BSBLanConfigEntry) -> bool:
"""Unload BSBLAN config entry."""
return await hass.config_entries.async_unload_platforms(entry, PLATFORMS)
async def async_migrate_entry(hass: HomeAssistant, entry: BSBLanConfigEntry) -> bool:
"""Migrate old config entries to the latest schema."""
LOGGER.debug(
"Migrating BSB-LAN entry from version %s.%s",
entry.version,
entry.minor_version,
)
if entry.version > 1:
# Downgraded from a future version; cannot migrate.
return False
# 1.1 -> 1.2: Add CONF_HEATING_CIRCUITS. Attempt to discover available
# heating circuits from the device; fall back to [1] (pre-multi-circuit
# default) if the device is unreachable or the endpoint is unsupported.
if entry.version == 1 and entry.minor_version < 2:
circuits: list[int] = [1]
config = BSBLANConfig(
host=entry.data[CONF_HOST],
passkey=entry.data[CONF_PASSKEY],
port=entry.data[CONF_PORT],
username=entry.data.get(CONF_USERNAME),
password=entry.data.get(CONF_PASSWORD),
)
session = async_get_clientsession(hass)
bsblan = BSBLAN(config=config, session=session)
try:
await bsblan.initialize()
circuits = await bsblan.get_available_circuits()
except (BSBLANError, TimeoutError) as err:
LOGGER.warning(
"Circuit discovery during migration failed for %s (%s); "
"defaulting to single circuit [1]. Use Reconfigure to "
"rediscover additional circuits later",
entry.data[CONF_HOST],
err,
)
hass.config_entries.async_update_entry(
entry,
data={**entry.data, CONF_HEATING_CIRCUITS: circuits},
minor_version=2,
)
LOGGER.debug(
"Migrated BSB-LAN entry to version %s.%s with circuits %s",
entry.version,
entry.minor_version,
circuits,
)
return True
+15 -28
@@ -4,7 +4,7 @@ from __future__ import annotations
from typing import Any, Final
from bsblan import BSBLANError, State, get_hvac_action_category
from bsblan import BSBLANError, get_hvac_action_category
from homeassistant.components.climate import (
ATTR_HVAC_MODE,
@@ -24,7 +24,7 @@ from homeassistant.helpers.entity_platform import AddConfigEntryEntitiesCallback
from . import BSBLanConfigEntry, BSBLanData
from .const import ATTR_TARGET_TEMPERATURE, DOMAIN
from .entity import BSBLanCircuitEntity
from .entity import BSBLanEntity
PARALLEL_UPDATES = 1
@@ -63,12 +63,10 @@ async def async_setup_entry(
) -> None:
"""Set up BSBLAN device based on a config entry."""
data = entry.runtime_data
async_add_entities(
BSBLANClimate(data, circuit) for circuit in data.available_circuits
)
async_add_entities([BSBLANClimate(data)])
class BSBLANClimate(BSBLanCircuitEntity, ClimateEntity):
class BSBLANClimate(BSBLanEntity, ClimateEntity):
"""Defines a BSBLAN climate device."""
_attr_name = None
@@ -86,50 +84,37 @@ class BSBLANClimate(BSBLanCircuitEntity, ClimateEntity):
def __init__(
self,
data: BSBLanData,
circuit: int,
) -> None:
"""Initialize BSBLAN climate device."""
super().__init__(data.fast_coordinator, data, circuit)
self._circuit = circuit
mac = format_mac(data.device.MAC)
super().__init__(data.fast_coordinator, data)
self._attr_unique_id = f"{format_mac(data.device.MAC)}-climate"
# Backward compatible unique ID: circuit 1 keeps old format
if circuit == 1:
self._attr_unique_id = f"{mac}-climate"
else:
self._attr_unique_id = f"{mac}-climate-{circuit}"
# Set temperature range from per-circuit static data
if (static := data.static.get(circuit)) is not None:
# Set temperature range if available, otherwise use Home Assistant defaults
if (static := data.static) is not None:
if (min_temp := static.min_temp) is not None and min_temp.value is not None:
self._attr_min_temp = min_temp.value
if (max_temp := static.max_temp) is not None and max_temp.value is not None:
self._attr_max_temp = max_temp.value
self._attr_temperature_unit = data.fast_coordinator.client.get_temperature_unit
@property
def _circuit_state(self) -> State:
"""Return the state for this circuit."""
return self.coordinator.data.states[self._circuit]
@property
def current_temperature(self) -> float | None:
"""Return the current temperature."""
if (current_temp := self._circuit_state.current_temperature) is None:
if (current_temp := self.coordinator.data.state.current_temperature) is None:
return None
return current_temp.value
@property
def target_temperature(self) -> float | None:
"""Return the temperature we try to reach."""
if (target_temp := self._circuit_state.target_temperature) is None:
if (target_temp := self.coordinator.data.state.target_temperature) is None:
return None
return target_temp.value
@property
def _hvac_mode_value(self) -> int | None:
"""Return the raw hvac_mode value from the coordinator."""
if (hvac_mode := self._circuit_state.hvac_mode) is None:
if (hvac_mode := self.coordinator.data.state.hvac_mode) is None:
return None
return hvac_mode.value
@@ -143,7 +128,9 @@ class BSBLANClimate(BSBLanCircuitEntity, ClimateEntity):
@property
def hvac_action(self) -> HVACAction | None:
"""Return the current running hvac action."""
if (action := self._circuit_state.hvac_action) is None or action.value is None:
if (
action := self.coordinator.data.state.hvac_action
) is None or action.value is None:
return None
category = get_hvac_action_category(action.value)
return HVACAction(category.name.lower())
@@ -183,7 +170,7 @@ class BSBLANClimate(BSBLanCircuitEntity, ClimateEntity):
data[ATTR_HVAC_MODE] = 1
try:
await self.coordinator.client.thermostat(**data, circuit=self._circuit)
await self.coordinator.client.thermostat(**data)
except BSBLANError as err:
raise HomeAssistantError(
"An error occurred while updating the BSBLAN device",
+5 -38
@@ -15,21 +15,19 @@ from homeassistant.helpers.aiohttp_client import async_get_clientsession
from homeassistant.helpers.device_registry import format_mac
from homeassistant.helpers.service_info.zeroconf import ZeroconfServiceInfo
from .const import CONF_HEATING_CIRCUITS, CONF_PASSKEY, DEFAULT_PORT, DOMAIN, LOGGER
from .const import CONF_PASSKEY, DEFAULT_PORT, DOMAIN
class BSBLANFlowHandler(ConfigFlow, domain=DOMAIN):
"""Handle a BSBLAN config flow."""
VERSION = 1
MINOR_VERSION = 2
def __init__(self) -> None:
"""Initialize BSBLan flow."""
self.host: str = ""
self.port: int = DEFAULT_PORT
self.mac: str | None = None
self.circuits: list[int] = [1]
self.passkey: str | None = None
self.username: str | None = None
self.password: str | None = None
@@ -79,7 +77,7 @@ class BSBLANFlowHandler(ConfigFlow, domain=DOMAIN):
# Try to get device info without authentication to minimize discovery popup
config = BSBLANConfig(host=self.host, port=self.port)
session = async_get_clientsession(self.hass)
bsblan = BSBLAN(config=config, session=session)
bsblan = BSBLAN(config, session)
try:
device = await bsblan.device()
except BSBLANError:
@@ -125,8 +123,6 @@ class BSBLANFlowHandler(ConfigFlow, domain=DOMAIN):
)
if not self._auth_required:
# Discover available heating circuits
await self._discover_circuits()
return self._async_create_entry()
self.passkey = user_input.get(CONF_PASSKEY)
@@ -141,7 +137,6 @@ class BSBLANFlowHandler(ConfigFlow, domain=DOMAIN):
"""Validate device connection and create entry."""
try:
await self._get_bsblan_info()
await self._discover_circuits()
except BSBLANAuthError:
if is_discovery:
return self.async_show_form(
@@ -235,12 +230,9 @@ class BSBLANFlowHandler(ConfigFlow, domain=DOMAIN):
# it gets the unique ID from the device info when it validates credentials
self._abort_if_unique_id_mismatch()
# Rediscover circuits in case hardware changed
await self._discover_circuits()
return self.async_update_reload_and_abort(
existing_entry,
data_updates={**user_input, CONF_HEATING_CIRCUITS: self.circuits},
data_updates=user_input,
reason="reconfigure_successful",
)
@@ -324,14 +316,13 @@ class BSBLANFlowHandler(ConfigFlow, domain=DOMAIN):
def _async_create_entry(self) -> ConfigFlowResult:
"""Create the config entry."""
return self.async_create_entry(
title="BSB-LAN",
title=format_mac(self.mac),
data={
CONF_HOST: self.host,
CONF_PORT: self.port,
CONF_PASSKEY: self.passkey,
CONF_USERNAME: self.username,
CONF_PASSWORD: self.password,
CONF_HEATING_CIRCUITS: self.circuits,
},
)
@@ -349,7 +340,7 @@ class BSBLANFlowHandler(ConfigFlow, domain=DOMAIN):
password=self.password,
)
session = async_get_clientsession(self.hass)
bsblan = BSBLAN(config=config, session=session)
bsblan = BSBLAN(config, session)
device = await bsblan.device()
retrieved_mac = device.MAC
@@ -371,27 +362,3 @@ class BSBLANFlowHandler(ConfigFlow, domain=DOMAIN):
CONF_PORT: self.port,
}
)
async def _discover_circuits(self) -> None:
"""Discover available heating circuits."""
config = BSBLANConfig(
host=self.host,
passkey=self.passkey,
port=self.port,
username=self.username,
password=self.password,
)
session = async_get_clientsession(self.hass)
bsblan = BSBLAN(config=config, session=session)
try:
await bsblan.initialize()
self.circuits = await bsblan.get_available_circuits()
except (
BSBLANError,
TimeoutError,
):
LOGGER.debug(
"Circuit discovery not available for %s, defaulting to single circuit",
self.host,
)
self.circuits = [1]
-1
@@ -22,6 +22,5 @@ ATTR_INSIDE_TEMPERATURE: Final = "inside_temperature"
ATTR_OUTSIDE_TEMPERATURE: Final = "outside_temperature"
CONF_PASSKEY: Final = "passkey"
CONF_HEATING_CIRCUITS: Final = "heating_circuits"
DEFAULT_PORT: Final = 80

Some files were not shown because too many files have changed in this diff.