Compare commits

...

1 Commit

Author SHA1 Message Date
Paulus Schoutsen
6170b47332 Add gen dashboard prototype 2026-02-27 10:05:46 -05:00
5 changed files with 869 additions and 1 deletion


@@ -0,0 +1,165 @@
# Dashboard Creation Guide
This guide provides best practices for building effective Home Assistant dashboards.
## Basic Structure of a Dashboard
A dashboard is a collection of views, and each view contains sections with cards. The basic structure looks like this:
```yaml
views:
- title: Living Room
path: living-room
icon: mdi:sofa
badges:
- type: entity
entity: sensor.living_room_temperature
- type: entity
entity: sensor.living_room_humidity
sections:
- type: grid
title: Lights
cards:
- type: tile
entity: light.living_room_ceiling
features:
- type: light-brightness
- type: tile
entity: light.floor_lamp
- type: tile
entity: light.reading_lamp
- type: grid
title: Climate
cards:
- type: thermostat
entity: climate.living_room
- type: tile
entity: sensor.living_room_temperature
- type: tile
entity: sensor.living_room_humidity
```
## Registry Listing Strategy
Use the list tools first to discover available data before building cards:
- `area_list`: list areas and filter with `area-id` and `floor`
- `device_list`: list devices and filter with `device-id`, `area`, and `floor`
- `entity_list`: list entities and filter with `entity-id`, `domain`, `area`, `floor`, `label`, `device`, and `device-class`
When needed, use `count`, `brief`, and `limit` flags to narrow output and then run a second call with the exact IDs you want to include in the dashboard.
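The two-step flow above can be sketched as tool calls, shown here as YAML for readability. The `living_room` area and the `light` domain filter are hypothetical values, not part of any real registry:
```yaml
# Step 1: check how many matching entities exist before listing them
tool: entity_list
args:
  area: living_room
  domain: light
  count: true
---
# Step 2: fetch brief entries (entity_id and name only) for the cards
tool: entity_list
args:
  area: living_room
  domain: light
  brief: true
  limit: 10
```
The `count` call keeps the first response small; the `brief` call then returns exactly the IDs needed to populate the dashboard.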
## Task-Focused Dashboards
When creating a dashboard focused on a specific task that involves a few devices (e.g., "Home Office", "Coffee Station", "Media Center"), include a **Maintenance section** alongside the primary controls. This section should contain:
- Battery levels for wireless devices
- Signal strength indicators
- Firmware update status
- Device connectivity states
- Any diagnostic entities relevant to the devices
This approach keeps users informed about the health of the devices supporting their task without cluttering the main interface. When something stops working, the maintenance section provides immediate visibility into potential issues.
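A minimal sketch of such a Maintenance section, assuming hypothetical battery, signal-strength, and firmware entities for a "Home Office" setup (substitute real entity IDs discovered via `entity_list`):
```yaml
- type: grid
  title: Maintenance
  cards:
    - type: tile
      entity: sensor.desk_switch_battery
    - type: tile
      entity: sensor.desk_switch_signal_strength
    - type: tile
      entity: update.desk_switch_firmware
```
Plain tiles are deliberate here: diagnostic entities need state display, not interactive features.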
## Respect Entity Categories
Entities have categories that indicate their intended purpose:
- **No category (primary)**: Main controls and states meant for regular user interaction
- **Diagnostic**: Entities for maintenance and troubleshooting (e.g., signal strength, battery level, firmware version)
- **Config**: Configuration entities for device settings (e.g., sensitivity levels, LED brightness)
When building dashboards:
- Group primary entities together for the main user interface
- Place diagnostic entities in a separate "Maintenance" or "Diagnostics" section
- Config entities typically belong in a dedicated settings area, not the main dashboard
This separation keeps dashboards clean and prevents users from accidentally changing configuration settings.
## Tile Card Features for Enhanced Control
Tile cards support features that provide additional control directly on the card. Consider using tile card features for:
- **Primary controls**: Light brightness slider, cover position, fan speed
- **Frequently used actions**: Toggle switches, quick actions
Avoid adding features to:
- Diagnostic entities
- Configuration entities
- Entities where simple state display is sufficient
Tile card features make important controls more accessible and visually prominent.
```yaml
type: tile
entity: light.ceiling_lights
features:
- type: light-brightness
```
Available features: `cover-open-close`, `cover-position`, `cover-tilt`, `cover-tilt-position`, `light-brightness`, `light-color-temp`, `lock-commands`, `lock-open-door`, `media-player-playback`, `media-player-volume-slider`, `media-player-volume-buttons`, `fan-direction`, `fan-oscillate`, `fan-preset-modes`, `fan-speed`, `alarm-modes`, `climate-fan-modes`, `climate-swing-modes`, `climate-swing-horizontal-modes`, `climate-hvac-modes`, `climate-preset-modes`, `counter-actions`, `date-set`, `select-options`, `numeric-input`, `target-humidity`, `target-temperature`, `toggle`, `water-heater-operation-modes`, `humidifier-modes`, `humidifier-toggle`, `vacuum-commands`, `valve-open-close`, `valve-position`, `lawn-mower-commands`, `update-actions`, `trend-graph`, `area-controls`, `bar-gauge`.
## Specialized Cards for Specific Domains
### Climate Entities
Use the **thermostat card** for climate entities. It provides:
- Current and target temperature display
- HVAC mode selection
- Temperature adjustment controls
- A visual representation that users intuitively understand
```yaml
type: thermostat
entity: climate.heatpump
```
### Camera and Image Entities
Use **picture-entity cards** for camera and image entities:
- Hide the state (the image itself is the state)
- Hide the name unless the image context is ambiguous (most cameras and images are self-explanatory when viewed)
- Let the visual content speak for itself
```yaml
type: picture-entity
entity: camera.demo_camera
show_state: false
show_name: false
camera_view: auto
fit_mode: cover
```
### Graph Cards
Sometimes you want to show historical data for an entity. The choice of graph card depends on the type of entity:
#### Statistics Graph (for sensor entities)
Use **statistics-graph** cards when displaying sensor data over time:
- Automatically calculates and displays statistics (mean, min, max)
- Optimized for numerical sensor data
- Better performance for long time ranges
#### History Graph (for other entity types)
Use **history-graph** cards for:
- Climate entity history (showing temperature changes alongside HVAC states)
- Binary sensor timelines
- State-based entities where you want to see state changes over time
- Any non-sensor entity where historical data is valuable
The history graph shows actual state changes as they occurred, which is more appropriate for non-numerical entities.
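Both choices can be sketched as follows, reusing the living room entities from the structure example above (the `period` and `stat_types` values shown are one reasonable configuration, not the only one):
```yaml
# Numerical sensor data: computed statistics over time
- type: statistics-graph
  entities:
    - sensor.living_room_temperature
  period: hour
  stat_types:
    - mean
    - min
    - max
# Non-numerical entity: raw state changes as they occurred
- type: history-graph
  entities:
    - climate.living_room
```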
## Using Badges for Global Information
Badges are ideal for displaying global data points that apply to an entire dashboard view. Good candidates include:
- Area temperature and humidity
- Security system status
- Weather conditions
- Presence/occupancy indicators
- General alerts or warnings
If the information is more specific to a subset of the dashboard, consider adding it to a section header instead of a badge. Badges work best for truly dashboard-wide context.
```yaml
type: entity
entity: sensor.temperature
```


@@ -196,6 +196,9 @@ async def async_setup(hass: HomeAssistant, config: ConfigType) -> bool:
websocket_api.async_register_command(
hass, websocket.websocket_lovelace_delete_config
)
websocket_api.async_register_command(
hass, websocket.websocket_lovelace_generate_dashboard
)
yaml_dashboards = config[DOMAIN].get(CONF_DASHBOARDS, {})


@@ -0,0 +1,379 @@
"""LLM tools for generating Lovelace dashboards."""
from __future__ import annotations
from pathlib import Path
from typing import Any, cast
import voluptuous as vol
from homeassistant.core import HomeAssistant
from homeassistant.helpers import (
area_registry as ar,
device_registry as dr,
entity_registry as er,
llm,
)
from homeassistant.util.json import JsonObjectType
API_ID = "lovelace_dashboard_generation"
API_NAME = "Lovelace Dashboard Generation"
API_PROMPT = """Use the list tools to discover available areas, devices and entities.
Always reference real entity_ids from tool results when building dashboard cards.
Return dashboard data that includes a top-level `views` array."""
GENERATE_GUIDELINES = Path(__file__).parent / "GUIDE.md"
_AREA_LIST_PARAMETERS = vol.Schema(
{
vol.Optional("area_id"): str,
vol.Optional("area-id"): str,
vol.Optional("floor"): str,
vol.Optional("count", default=False): bool,
vol.Optional("brief", default=False): bool,
vol.Optional("limit", default=0): vol.All(vol.Coerce(int), vol.Range(min=0)),
}
)
_DEVICE_LIST_PARAMETERS = vol.Schema(
{
vol.Optional("device_id"): str,
vol.Optional("device-id"): str,
vol.Optional("area"): str,
vol.Optional("floor"): str,
vol.Optional("count", default=False): bool,
vol.Optional("brief", default=False): bool,
vol.Optional("limit", default=0): vol.All(vol.Coerce(int), vol.Range(min=0)),
}
)
_ENTITY_LIST_PARAMETERS = vol.Schema(
{
vol.Optional("entity_id"): str,
vol.Optional("entity-id"): str,
vol.Optional("domain"): str,
vol.Optional("area"): str,
vol.Optional("floor"): str,
vol.Optional("label"): str,
vol.Optional("device"): str,
vol.Optional("device_class"): str,
vol.Optional("device-class"): str,
vol.Optional("count", default=False): bool,
vol.Optional("brief", default=False): bool,
vol.Optional("limit", default=0): vol.All(vol.Coerce(int), vol.Range(min=0)),
}
)
def _tool_str(data: dict[str, Any], *keys: str) -> str | None:
"""Extract a string value from alternate parameter names."""
for key in keys:
value = data.get(key)
if isinstance(value, str):
return value
return None
def _entity_device_class(
reg_entry: er.RegistryEntry | None, attributes: dict[str, Any]
) -> str:
"""Resolve device class with the same precedence as hab entity list."""
if reg_entry and reg_entry.original_device_class:
return reg_entry.original_device_class
if reg_entry and reg_entry.device_class:
return reg_entry.device_class
device_class = attributes.get("device_class")
if isinstance(device_class, str):
return device_class
return ""
def _apply_limit(items: list[dict[str, Any]], limit: int) -> list[dict[str, Any]]:
"""Apply list limit the same way as hab list commands."""
if limit > 0 and len(items) > limit:
return items[:limit]
return items
async def build_generation_instructions(hass: HomeAssistant, prompt: str) -> str:
"""Build instructions used for Lovelace dashboard generation."""
guide = await hass.async_add_executor_job(GENERATE_GUIDELINES.read_text)
return (
"Generate a Home Assistant Lovelace dashboard configuration.\n"
"Return only valid JSON (no markdown and no explanation).\n"
"Return a complete dashboard object with a top-level `views` array.\n"
"Each view should include useful cards for the user request.\n"
"Use the list tools to discover real area, device and entity IDs.\n"
"Use real entity IDs discovered from available tools.\n"
"Prioritize readable, practical dashboards over decorative layouts.\n\n"
f"User request:\n{prompt.strip()}\n\n"
f"{guide}"
)
class AreaListTool(llm.Tool):
"""Tool mirroring `hab area list`."""
name = "area_list"
description = (
"List areas with hab-compatible filters: area-id, floor, count, brief, limit."
)
parameters = _AREA_LIST_PARAMETERS
def __init__(self, hass: HomeAssistant) -> None:
"""Initialize the tool."""
self._hass = hass
async def async_call(
self,
hass: HomeAssistant,
tool_input: llm.ToolInput,
llm_context: llm.LLMContext,
) -> JsonObjectType:
"""List areas with hab-compatible output fields."""
del hass, llm_context
data = cast(dict[str, Any], self.parameters(tool_input.tool_args))
area_id_filter = _tool_str(data, "area_id", "area-id")
floor_filter = _tool_str(data, "floor")
count = cast(bool, data["count"])
brief = cast(bool, data["brief"])
limit = cast(int, data["limit"])
area_registry = ar.async_get(self._hass)
result: list[dict[str, Any]] = []
for area in area_registry.areas.values():
if area_id_filter and area.id != area_id_filter:
continue
if floor_filter and area.floor_id != floor_filter:
continue
result.append(
{
"area_id": area.id,
"name": area.name,
"floor_id": area.floor_id,
"icon": area.icon,
"labels": sorted(area.labels),
}
)
if count:
return {"count": len(result)}
result = _apply_limit(result, limit)
if brief:
return {
"areas": [
{"area_id": area["area_id"], "name": area["name"]}
for area in result
]
}
return {"areas": result}
class DeviceListTool(llm.Tool):
"""Tool mirroring `hab device list`."""
name = "device_list"
description = (
"List devices with hab-compatible filters: device-id, area, floor, count,"
" brief, limit."
)
parameters = _DEVICE_LIST_PARAMETERS
def __init__(self, hass: HomeAssistant) -> None:
"""Initialize the tool."""
self._hass = hass
async def async_call(
self,
hass: HomeAssistant,
tool_input: llm.ToolInput,
llm_context: llm.LLMContext,
) -> JsonObjectType:
"""List devices with hab-compatible output fields."""
del hass, llm_context
data = cast(dict[str, Any], self.parameters(tool_input.tool_args))
device_id_filter = _tool_str(data, "device_id", "device-id")
area_filter = _tool_str(data, "area")
floor_filter = _tool_str(data, "floor")
count = cast(bool, data["count"])
brief = cast(bool, data["brief"])
limit = cast(int, data["limit"])
area_floor_map: dict[str, str] = {}
if floor_filter:
area_registry = ar.async_get(self._hass)
area_floor_map = {
area.id: area.floor_id or ""
for area in area_registry.areas.values()
if area.id
}
device_registry = dr.async_get(self._hass)
result: list[dict[str, Any]] = []
for device in device_registry.devices.values():
if device_id_filter and device.id != device_id_filter:
continue
if area_filter and device.area_id != area_filter:
continue
if floor_filter:
if not device.area_id:
continue
if area_floor_map.get(device.area_id) != floor_filter:
continue
result.append(
{
"id": device.id,
"name": device.name,
"manufacturer": device.manufacturer,
"model": device.model,
"area_id": device.area_id,
}
)
if count:
return {"count": len(result)}
result = _apply_limit(result, limit)
if brief:
return {
"devices": [{"id": item["id"], "name": item["name"]} for item in result]
}
return {"devices": result}
class EntityListTool(llm.Tool):
"""Tool mirroring `hab entity list`."""
name = "entity_list"
description = (
"List entities with hab-compatible filters: entity-id, domain, area, floor,"
" label, device, device-class, count, brief, limit."
)
parameters = _ENTITY_LIST_PARAMETERS
def __init__(self, hass: HomeAssistant) -> None:
"""Initialize the tool."""
self._hass = hass
async def async_call(
self,
hass: HomeAssistant,
tool_input: llm.ToolInput,
llm_context: llm.LLMContext,
) -> JsonObjectType:
"""List entities with hab-compatible output fields."""
del hass, llm_context
data = cast(dict[str, Any], self.parameters(tool_input.tool_args))
entity_id_filter = _tool_str(data, "entity_id", "entity-id")
domain_filter = _tool_str(data, "domain")
area_filter = _tool_str(data, "area")
floor_filter = _tool_str(data, "floor")
label_filter = _tool_str(data, "label")
device_filter = _tool_str(data, "device")
device_class_filter = _tool_str(data, "device_class", "device-class")
count = cast(bool, data["count"])
brief = cast(bool, data["brief"])
limit = cast(int, data["limit"])
area_floor_map: dict[str, str] = {}
if floor_filter:
area_registry = ar.async_get(self._hass)
area_floor_map = {
area.id: area.floor_id or ""
for area in area_registry.areas.values()
if area.id
}
entity_registry = er.async_get(self._hass)
result: list[dict[str, Any]] = []
for state in self._hass.states.async_all():
entity_id = state.entity_id
if entity_id_filter and entity_id != entity_id_filter:
continue
if domain_filter and state.domain != domain_filter:
continue
reg_entry = entity_registry.async_get(entity_id)
if device_filter:
if reg_entry is None or reg_entry.device_id != device_filter:
continue
if area_filter:
if reg_entry is None or reg_entry.area_id != area_filter:
continue
if floor_filter:
if reg_entry is None or not reg_entry.area_id:
continue
if area_floor_map.get(reg_entry.area_id) != floor_filter:
continue
if label_filter:
if reg_entry is None or label_filter not in reg_entry.labels:
continue
friendly_name = state.attributes.get("friendly_name")
if not isinstance(friendly_name, str):
friendly_name = ""
device_class = _entity_device_class(reg_entry, state.attributes)
if device_class_filter and device_class != device_class_filter:
continue
result.append(
{
"entity_id": entity_id,
"state": state.state,
"name": friendly_name,
"area_id": reg_entry.area_id if reg_entry else "",
"device_id": reg_entry.device_id if reg_entry else "",
"device_class": device_class,
"labels": sorted(reg_entry.labels) if reg_entry else [],
"disabled": reg_entry.disabled_by is not None
if reg_entry
else False,
}
)
if count:
return {"count": len(result)}
result = _apply_limit(result, limit)
if brief:
return {
"entities": [
{"entity_id": item["entity_id"], "name": item["name"]}
for item in result
]
}
return {"entities": result}
class LovelaceDashboardGenerationAPI(llm.API):
"""LLM API for Lovelace dashboard generation."""
def __init__(self, hass: HomeAssistant) -> None:
"""Initialize the API."""
super().__init__(hass=hass, id=API_ID, name=API_NAME)
async def async_get_api_instance(
self, llm_context: llm.LLMContext
) -> llm.APIInstance:
"""Return the API instance."""
return llm.APIInstance(
api=self,
api_prompt=API_PROMPT,
llm_context=llm_context,
tools=[
AreaListTool(self.hass),
DeviceListTool(self.hass),
EntityListTool(self.hass),
],
)


@@ -8,11 +8,12 @@ from typing import TYPE_CHECKING, Any
import voluptuous as vol
from homeassistant.components import websocket_api
from homeassistant.components import ai_task, websocket_api
from homeassistant.core import HomeAssistant
from homeassistant.exceptions import HomeAssistantError
from homeassistant.helpers import config_validation as cv
from homeassistant.helpers.json import json_fragment
from homeassistant.util.json import json_loads
from .const import (
CONF_RESOURCE_MODE,
@@ -22,6 +23,7 @@ from .const import (
ConfigNotFound,
)
from .dashboard import LovelaceConfig
from .llm import LovelaceDashboardGenerationAPI, build_generation_instructions
if TYPE_CHECKING:
from .resources import ResourceStorageCollection
@@ -184,3 +186,93 @@ async def websocket_lovelace_delete_config(
) -> None:
"""Delete Lovelace UI configuration."""
await config.async_delete()
def _coerce_generated_dashboard(data: Any) -> dict[str, Any]:
"""Coerce AI output into a dashboard config object."""
if isinstance(data, dict):
return data
if not isinstance(data, str):
raise HomeAssistantError("Generated dashboard must be a valid JSON object")
candidates = [data.strip()]
if "```" in data:
for block in data.split("```"):
candidate = block.strip()
if not candidate:
continue
if candidate.casefold().startswith("json"):
candidate = candidate[4:].strip()
candidates.append(candidate)
for candidate in candidates:
try:
parsed = json_loads(candidate)
except ValueError:
continue
if isinstance(parsed, dict):
return parsed
raise HomeAssistantError("Generated dashboard must be a valid JSON object")
def _validate_generated_dashboard(data: Any) -> dict[str, Any]:
"""Validate generated dashboard response."""
if not isinstance(data, dict):
raise HomeAssistantError("Generated dashboard must be an object")
views = data.get("views")
if not isinstance(views, list) or not views:
raise HomeAssistantError(
"Generated dashboard must include at least one view in `views`"
)
if not all(isinstance(view, dict) for view in views):
raise HomeAssistantError("Each dashboard view must be an object")
return data
@websocket_api.require_admin
@websocket_api.websocket_command(
{
"type": "lovelace/config/generate",
vol.Required("prompt"): cv.string,
}
)
@websocket_api.async_response
async def websocket_lovelace_generate_dashboard(
hass: HomeAssistant,
connection: websocket_api.ActiveConnection,
msg: dict[str, Any],
) -> None:
"""Generate a Lovelace dashboard configuration from a prompt."""
if ai_task.DOMAIN not in hass.config.components:
connection.send_error(
msg["id"],
"error",
"AI Task integration is not available. Configure AI Task first.",
)
return
try:
result = await ai_task.async_generate_data(
hass,
task_name="lovelace_dashboard_generation",
instructions=await build_generation_instructions(hass, msg["prompt"]),
llm_api=LovelaceDashboardGenerationAPI(hass),
)
config = _validate_generated_dashboard(_coerce_generated_dashboard(result.data))
except HomeAssistantError as err:
connection.send_error(msg["id"], "error", str(err))
return
connection.send_result(
msg["id"],
{
"conversation_id": result.conversation_id,
"config": config,
},
)


@@ -7,13 +7,34 @@ from unittest.mock import MagicMock, patch
import pytest
import voluptuous as vol
from homeassistant.components import ai_task
from homeassistant.components.lovelace import _validate_url_slug
from homeassistant.components.lovelace.llm import LovelaceDashboardGenerationAPI
from homeassistant.core import HomeAssistant
from homeassistant.helpers import (
area_registry as ar,
device_registry as dr,
entity_registry as er,
floor_registry as fr,
llm,
)
from homeassistant.setup import async_setup_component
from tests.common import MockConfigEntry
from tests.typing import WebSocketGenerator
def _llm_context() -> llm.LLMContext:
"""Create an LLM context for tests."""
return llm.LLMContext(
platform="test",
context=None,
language="en",
assistant=None,
device_id=None,
)
@pytest.fixture
def mock_onboarding_not_done() -> Generator[MagicMock]:
"""Mock that Home Assistant is currently onboarding."""
@@ -100,6 +121,214 @@ async def test_create_dashboards_when_not_onboarded(
assert response["result"] == {"strategy": {"type": "map"}}
async def test_generate_dashboard_with_ai(
hass: HomeAssistant,
hass_ws_client: WebSocketGenerator,
mock_onboarding_done,
) -> None:
"""Test generating a dashboard with AI over websocket."""
hass.config.components.add(ai_task.DOMAIN)
assert await async_setup_component(hass, "lovelace", {})
client = await hass_ws_client(hass)
generated_config = {"views": [{"title": "Home", "cards": []}]}
with patch(
"homeassistant.components.lovelace.websocket.ai_task.async_generate_data",
return_value=ai_task.GenDataTaskResult(
conversation_id="conversation-1",
data=generated_config,
),
) as mock_generate:
await client.send_json_auto_id(
{"type": "lovelace/config/generate", "prompt": "Create a lights dashboard"}
)
response = await client.receive_json()
assert response["success"]
assert response["result"] == {
"conversation_id": "conversation-1",
"config": generated_config,
}
kwargs = mock_generate.call_args.kwargs
assert kwargs["task_name"] == "lovelace_dashboard_generation"
assert "Create a lights dashboard" in kwargs["instructions"]
assert isinstance(kwargs["llm_api"], LovelaceDashboardGenerationAPI)
async def test_generate_dashboard_with_ai_invalid_response(
hass: HomeAssistant,
hass_ws_client: WebSocketGenerator,
mock_onboarding_done,
) -> None:
"""Test generating a dashboard fails when model returns invalid config."""
hass.config.components.add(ai_task.DOMAIN)
assert await async_setup_component(hass, "lovelace", {})
client = await hass_ws_client(hass)
with patch(
"homeassistant.components.lovelace.websocket.ai_task.async_generate_data",
return_value=ai_task.GenDataTaskResult(
conversation_id="conversation-1",
data={"title": "Broken"},
),
):
await client.send_json_auto_id(
{"type": "lovelace/config/generate", "prompt": "Broken dashboard"}
)
response = await client.receive_json()
assert not response["success"]
assert response["error"]["code"] == "error"
assert "at least one view" in response["error"]["message"]
async def test_generate_dashboard_with_ai_json_markdown(
hass: HomeAssistant,
hass_ws_client: WebSocketGenerator,
mock_onboarding_done,
) -> None:
"""Test generating a dashboard when model returns JSON in markdown."""
hass.config.components.add(ai_task.DOMAIN)
assert await async_setup_component(hass, "lovelace", {})
client = await hass_ws_client(hass)
with patch(
"homeassistant.components.lovelace.websocket.ai_task.async_generate_data",
return_value=ai_task.GenDataTaskResult(
conversation_id="conversation-1",
data='```json\n{"views":[{"title":"Home"}]}\n```',
),
):
await client.send_json_auto_id(
{"type": "lovelace/config/generate", "prompt": "Markdown dashboard"}
)
response = await client.receive_json()
assert response["success"]
assert response["result"]["config"] == {"views": [{"title": "Home"}]}
async def test_lovelace_generation_list_tools_match_hab(
hass: HomeAssistant,
area_registry: ar.AreaRegistry,
device_registry: dr.DeviceRegistry,
entity_registry: er.EntityRegistry,
floor_registry: fr.FloorRegistry,
) -> None:
"""Test list tool behavior matches Home Assistant Builder list commands."""
config_entry = MockConfigEntry(domain="test")
config_entry.add_to_hass(hass)
floor_1 = floor_registry.async_create("First floor")
floor_2 = floor_registry.async_create("Second floor")
kitchen = area_registry.async_create("Kitchen", floor_id=floor_1.floor_id)
bedroom = area_registry.async_create("Bedroom", floor_id=floor_2.floor_id)
device_kitchen = device_registry.async_get_or_create(
config_entry_id=config_entry.entry_id,
identifiers={("test", "kitchen_device")},
name="Kitchen Device",
manufacturer="Acme",
model="M1",
)
device_registry.async_update_device(device_kitchen.id, area_id=kitchen.id)
device_bedroom = device_registry.async_get_or_create(
config_entry_id=config_entry.entry_id,
identifiers={("test", "bedroom_device")},
name="Bedroom Device",
manufacturer="Acme",
model="M2",
)
device_registry.async_update_device(device_bedroom.id, area_id=bedroom.id)
kitchen_entry = entity_registry.async_get_or_create(
"sensor",
"test",
"kitchen_temperature",
device_id=device_kitchen.id,
original_device_class="temperature",
)
entity_registry.async_update_entity(
kitchen_entry.entity_id, area_id=kitchen.id, labels={"important"}
)
bedroom_entry = entity_registry.async_get_or_create(
"binary_sensor",
"test",
"bedroom_motion",
device_id=device_bedroom.id,
original_device_class="motion",
)
entity_registry.async_update_entity(bedroom_entry.entity_id, area_id=bedroom.id)
hass.states.async_set(
kitchen_entry.entity_id, "21", {"friendly_name": "Kitchen Temp"}
)
hass.states.async_set(
bedroom_entry.entity_id, "off", {"friendly_name": "Bedroom Motion"}
)
api = LovelaceDashboardGenerationAPI(hass)
api_instance = await api.async_get_api_instance(_llm_context())
tools = {tool.name: tool for tool in api_instance.tools}
assert sorted(tools) == ["area_list", "device_list", "entity_list"]
area_result = await tools["area_list"].async_call(
hass,
llm.ToolInput(
tool_name="area_list",
tool_args={"floor": floor_1.floor_id, "brief": True},
),
_llm_context(),
)
assert area_result == {"areas": [{"area_id": kitchen.id, "name": "Kitchen"}]}
device_result = await tools["device_list"].async_call(
hass,
llm.ToolInput(
tool_name="device_list",
tool_args={"area": kitchen.id, "brief": True},
),
_llm_context(),
)
assert device_result == {
"devices": [{"id": device_kitchen.id, "name": "Kitchen Device"}]
}
entity_result = await tools["entity_list"].async_call(
hass,
llm.ToolInput(
tool_name="entity_list",
tool_args={
"domain": "sensor",
"device-class": "temperature",
"label": "important",
"brief": True,
},
),
_llm_context(),
)
assert entity_result == {
"entities": [{"entity_id": kitchen_entry.entity_id, "name": "Kitchen Temp"}]
}
entity_count = await tools["entity_list"].async_call(
hass,
llm.ToolInput(
tool_name="entity_list",
tool_args={"device": device_kitchen.id, "count": True},
),
_llm_context(),
)
assert entity_count == {"count": 1}
@pytest.mark.parametrize(
("value", "expected"),
[