Compare commits


107 Commits

Author SHA1 Message Date
3b9af23290 Merge branch 'component_bt/optimize_spp_stop_server_v4.3' into 'release/v4.3'
component_bt/Optimize SPP Stop Server API [backport v4.3]

See merge request espressif/esp-idf!12615
2021-03-12 07:45:05 +00:00
ff12b50e45 Merge branch 'feature/wifi_support_sta_sleep_at_disconnected_v4.3' into 'release/v4.3'
esp_wifi: support STA sleep while disconnected (backport v4.3)

See merge request espressif/esp-idf!12720
2021-03-12 07:43:46 +00:00
f6336516d1 Merge branch 'bugfix/fix_ble_connect_evt_report_remote_addr_err_v4.3' into 'release/v4.3'
fix ble connection event report remote address error (backport v4.3)

See merge request espressif/esp-idf!12717
2021-03-12 05:59:25 +00:00
4d80dd1238 Merge branch 'bugfix/c++_usage_esp_core_dump_h_v4.3' into 'release/v4.3'
Fixed c++ include usage for esp_core_dump.h (backport v4.3)

See merge request espressif/esp-idf!12677
2021-03-12 04:01:42 +00:00
zwj
8a2155f95e fix ble connection event report remote address error 2021-03-12 11:44:13 +08:00
367190deaf esp_wifi: support STA sleep in the disconnected state 2021-03-12 00:22:38 +08:00
dbb632fe34 Merge branch 'bugfix/fix_smartconfig_issue_v4.3' into 'release/v4.3'
esp_wifi: Fix failure of the second provisioning attempt with smartconfig (backport v4.3)

See merge request espressif/esp-idf!12685
2021-03-11 14:28:52 +00:00
1329747dc1 Merge branch 'bugfix/flash_s3_v4.3' into 'release/v4.3'
spi_flash: remove useless dummy and make rom compatible on esp32s3(backport v4.3)

See merge request espressif/esp-idf!12704
2021-03-11 13:38:52 +00:00
f9e1942252 Merge branch 'bugfix/ota_v4.3' into 'release/v4.3'
ota: fix ota with flash encryption(backport v4.3)

See merge request espressif/esp-idf!12705
2021-03-11 13:21:00 +00:00
82ffb33085 Merge branch 'feature/crypto_reserve_gdma_ch_v4.3' into 'release/v4.3'
aes/sha: use a shared lazy allocated GDMA channel for AES and SHA (v4.3)

See merge request espressif/esp-idf!12676
2021-03-11 10:50:09 +00:00
9fd12182de Merge branch 'docs/esp32c3_fatal_errors_v4.3' into 'release/v4.3'
doc: update "Fatal Errors" chapter for ESP32C3 board (backport v4.3)

See merge request espressif/esp-idf!12678
2021-03-11 10:48:03 +00:00
d42958439d Merge branch 'doc/c3_api_ref_update_v4.3' into 'release/v4.3'
docs: update api-reference chapters for C3 (v4.3)

See merge request espressif/esp-idf!12583
2021-03-11 09:36:44 +00:00
37946ab300 deep sleep: power down wifi and bt during deep sleep 2021-03-11 07:32:30 +00:00
2b7a3f6d85 light sleep: some default parameters optimization 2021-03-11 07:32:30 +00:00
60642e580c esp_wifi: Fix failure of the second provisioning attempt with ESPTouch v2 2021-03-11 07:32:30 +00:00
4235e80008 Merge branch 'bugfix/backport_bugfixs_to_release_v4.3' into 'release/v4.3'
Bugfix/backport bugfixs to release v4.3

See merge request espressif/esp-idf!12682
2021-03-11 07:15:00 +00:00
a8e0989648 Merge branch 'feature/re-enable_suspend_test_esp32c3_v4.3' into 'release/v4.3'
freertos: Workaround delay between interrupt request and trigger on RISC-V (backport v4.3)

See merge request espressif/esp-idf!12679
2021-03-11 07:12:56 +00:00
28f50addda ota: fix ota with flash encryption 2021-03-11 14:42:09 +08:00
9393a402eb doc: update "Fatal Errors" chapter for ESP32C3 board 2021-03-11 06:37:30 +00:00
268787c5fb spi_flash: remove useless dummy and make rom compatible on esp32s3 2021-03-11 14:31:27 +08:00
233f3f80e5 Merge branch 'feature/skip_validate_v4.3' into 'release/v4.3'
bootloader: Add config options to skip validation of app for minimum boot time (v4.3)

See merge request espressif/esp-idf!12683
2021-03-11 05:47:43 +00:00
d01efe4b8c Fix for C2H flow control param check when only BLE mode is configured. 2021-03-11 12:04:01 +08:00
e48935d187 Merge branch 'bugfix/uart_baud_c3_s3_v4.3' into 'release/v4.3'
uart: fixed incorrect baudrate on C3 and S3 when target is too slow (v4.3)

See merge request espressif/esp-idf!12681
2021-03-10 17:10:13 +00:00
12bbd1f051 Merge branch 'bugfix/fix_some_wifi_bugs_0309_v4.3' into 'release/v4.3'
esp_wifi: Fix some Wi-Fi bugs 0309 (backport v4.3)

See merge request espressif/esp-idf!12670
2021-03-10 11:07:20 +00:00
0305d13467 bootloader: Add config options to skip validation of app for minimum boot time 2021-03-10 19:08:47 +11:00
e6ace495b4 Fix issues during light sleep and DFS 2021-03-10 14:14:49 +08:00
b449909b35 Fix controller task watchdog in Wi-Fi test 2021-03-10 14:14:29 +08:00
0253d825e9 Fix IRAM_ATTR missing 2021-03-10 14:14:04 +08:00
32b0836485 driver: esp32s3 fix UART driver
Set the UART2 instance to the correct base address (the ESP32-S3 has a non-standard peripheral base address)
2021-03-10 13:41:10 +08:00
7dca6b7428 uart: fixed incorrect baudrate on C3 and S3 when target is too slow
The integer part of the divider is only 12 bits wide now, so a prescaler is used instead to reach low frequencies.
2021-03-10 13:41:10 +08:00
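
To make the divider limitation above concrete, here is a small worked example of the arithmetic; it is an illustrative sketch (the 40 MHz source clock and the 0xFFF limit are assumptions for the example), not the driver code itself.

/* Sketch: why a 12-bit integer divider alone cannot reach low baud rates,
 * and how a prescaler brings the divider back into range. Values are illustrative. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    const uint32_t sclk_hz = 40 * 1000 * 1000;  /* assumed UART source clock */
    const uint32_t baud = 300;                  /* a low target baud rate */
    uint32_t raw_div = sclk_hz / baud;          /* 133333 -> far above the 12-bit limit (4095) */

    uint32_t prescaler = 1;                     /* find a prescaler so the divider fits in 12 bits */
    while ((sclk_hz / prescaler) / baud > 0xFFF) {
        prescaler++;
    }
    printf("raw divider = %u, prescaler = %u, divider = %u\n",
           raw_div, prescaler, (sclk_hz / prescaler) / baud);
    return 0;
}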
b180c2a146 Merge branch 'bugfix/touch_element_callback_para_v4.3' into 'release/v4.3'
touch_element: fix event callback parameter type, change it into pointer (backport v4.3)

See merge request espressif/esp-idf!12629
2021-03-10 05:26:50 +00:00
774f010196 freertos: Fix delay between interrupt request and trigger on RISC-V
NOP instructions have been added to prevent the CPU from
executing code it should not execute. There is a delay
between the moment an interrupt is requested and the moment it
actually fires, and this only happens on RISC-V SoCs.
2021-03-10 12:14:21 +08:00
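
A minimal sketch of the workaround idea described above; the instruction sequence and NOP count are assumptions for illustration, not the actual FreeRTOS port change.

/* Sketch: after re-enabling interrupts on a RISC-V core, insert a few NOPs so a
 * pending interrupt request fires before the following instructions execute.
 * Placement and count are illustrative assumptions. */
static inline void riscv_enable_interrupts_with_latency_pad(void)
{
    __asm__ volatile ("csrsi mstatus, 0x8");   /* set MIE: globally enable interrupts */
    __asm__ volatile ("nop; nop; nop; nop");   /* absorb the request-to-trigger delay */
}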
22a8fe5b6f Removed esp_core_dump.h from check_public_headers_exceptions.txt (as per @igrr 's request 2021-03-10 12:04:48 +08:00
d36c72fba0 Fixed c++ include usage for esp_core_dump.h 2021-03-10 12:04:42 +08:00
1c8fd4041e aes/sha: use a shared lazy allocated GDMA channel for AES and SHA
Removed the old dynamically allocated GDMA channel approach.
It proved too unreliable, as we could not ensure that consumers of mbedtls
would properly free the channels after use.

Replaced it with a single shared GDMA channel for AES and SHA, which is not
released unless the user specifically calls the API for releasing it.
2021-03-10 09:40:35 +08:00
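
A rough sketch of the shared, lazily allocated channel pattern described above, using the gdma_new_channel()/gdma_del_channel() calls that appear later in this compare view; the crypto_shared_gdma_* names and the TX direction are illustrative assumptions, not the actual mbedtls port code.

/* Sketch: one lazily created GDMA channel shared by AES and SHA, freed only
 * through an explicit release call. Names are illustrative. */
#include "esp_err.h"
#include "esp_private/gdma.h"

static gdma_channel_handle_t s_crypto_shared_chan;    /* NULL until first use */

static esp_err_t crypto_shared_gdma_acquire(gdma_channel_handle_t *out_chan)
{
    if (s_crypto_shared_chan == NULL) {                /* lazy allocation on first use */
        gdma_channel_alloc_config_t cfg = {
            .direction = GDMA_CHANNEL_DIRECTION_TX,
        };
        esp_err_t err = gdma_new_channel(&cfg, &s_crypto_shared_chan);
        if (err != ESP_OK) {
            return err;
        }
    }
    *out_chan = s_crypto_shared_chan;                  /* every caller gets the same handle */
    return ESP_OK;
}

static void crypto_shared_gdma_release(void)           /* only an explicit call frees it */
{
    if (s_crypto_shared_chan) {
        gdma_del_channel(s_crypto_shared_chan);
        s_crypto_shared_chan = NULL;
    }
}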
d4263c2558 Merge branch 'bugfix/btdm_crash_when_esp_restart_v4.3' into 'release/v4.3'
components/bt: Fix crash in Bluetooth when esp_restart

See merge request espressif/esp-idf!12641
2021-03-09 17:30:45 +00:00
ea49545269 esp_wifi: Fix some Wi-Fi bugs 0309
1. Fix an issue where parameters obtained from RAM could not be saved to NVS
2. Do not store default values in NVS
3. Fix hidden-AP scanning after connecting to an AP
4. Fix a watchdog issue when receiving action frames
5. Fix the reason code changing from 15 to 204 when a wrong password is provided
6. Fix a wrong return value from set config
7. Fix an AMPDU age timer memory leak
2021-03-09 20:30:13 +08:00
9ca05c17ae Merge branch 'bugfix/ota_simple_backport_v4.3' into 'release/v4.3'
ota: fix ota with flash encryption(backport v4.3)

See merge request espressif/esp-idf!12639
2021-03-09 09:42:57 +00:00
c5f8fbea02 Merge branch 'fix/esp_tls_prevent_freeing_global_CA_store_after_each_request_v4.3' into 'release/v4.3'
fix(esp_tls): prevent freeing global CA store after each request (v4.3)

See merge request espressif/esp-idf!12630
2021-03-08 04:59:19 +00:00
cf7891cb93 Merge branch 'feature/touch_element_api_reference_v4.3' into 'release/v4.3'
touch_element: add touch element lib api-reference doc (backport v4.3)

See merge request espressif/esp-idf!12572
2021-03-08 02:58:43 +00:00
1e77586120 components/bt: Fix crash in Bluetooth when esp_restart 2021-03-05 20:19:18 +08:00
1ea548ecb3 ota: fix ota with flash encryption 2021-03-05 18:39:32 +08:00
962ea61d53 protocomm: Fixed NULL check of allocated memory
Fixes one part of - https://github.com/espressif/esp-idf/issues/6440
2021-03-05 10:04:45 +05:30
d61ee580d5 esp_tls: Fix misplaced parenthesis in esp_tls_mbedtls.c
Fixes one part of -  https://github.com/espressif/esp-idf/issues/6440
2021-03-05 10:04:45 +05:30
947e445e02 Fix esp_tls: Prevent freeing of global ca store after each connection
when dynamic ssl buffers are enabled
2021-03-05 09:53:19 +05:30
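
For context on the fix above, a minimal usage sketch of the global CA store in esp_tls; the embedded-certificate symbols are placeholders, and the exact signatures should be checked against esp_tls.h for your IDF version.

/* Sketch: reuse one global CA store across connections instead of embedding a CA
 * in every esp_tls_cfg_t. Certificate symbols are placeholders. */
#include "esp_tls.h"

extern const unsigned char ca_pem_start[] asm("_binary_ca_pem_start");  /* assumed embedded cert */
extern const unsigned char ca_pem_end[]   asm("_binary_ca_pem_end");

void https_request_with_global_ca_store(void)
{
    /* Populate the global store once; the fix above keeps it alive across requests. */
    esp_tls_set_global_ca_store(ca_pem_start, ca_pem_end - ca_pem_start);

    esp_tls_cfg_t cfg = {
        .use_global_ca_store = true,    /* no per-connection CA copy */
    };
    esp_tls_t *tls = esp_tls_conn_http_new("https://example.com", &cfg);
    if (tls != NULL) {
        /* ... esp_tls_conn_read() / esp_tls_conn_write() ... */
        esp_tls_conn_delete(tls);
    }
}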
1821fd766b touch_element: fix event callback parameter type, change it into pointer 2021-03-05 11:45:47 +08:00
d508182429 Merge branch 'feature/touch_element_example_v4.3' into 'release/v4.3'
touch_element: add touch element lib examples (backport v4.3)

See merge request espressif/esp-idf!12571
2021-03-05 03:29:43 +00:00
58c9a2eaba add API esp_spp_stop_srv_scn to stop a specific server 2021-03-04 15:16:44 +08:00
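
A brief usage sketch for the new API, based on the esp_spp_api.h changes shown further down in this compare view; the callback wiring and variable names are assumptions for illustration.

/* Sketch: remember the server channel number reported in ESP_SPP_START_EVT and
 * use it later to stop only that server. Wiring is illustrative. */
#include "esp_spp_api.h"

static uint8_t s_my_server_scn;   /* SCN of the server we may want to stop */

static void spp_callback(esp_spp_cb_event_t event, esp_spp_cb_param_t *param)
{
    switch (event) {
    case ESP_SPP_START_EVT:
        s_my_server_scn = param->start.scn;   /* scn field added by this change */
        break;
    case ESP_SPP_SRV_STOP_EVT:
        /* param->srv_stop.scn reports which server actually stopped */
        break;
    default:
        break;
    }
}

static void stop_one_server(void)
{
    esp_spp_stop_srv_scn(s_my_server_scn);    /* esp_spp_stop_srv() would stop them all */
}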
07b62da0d4 Merge branch 'feature/add_qrcode_in_provisioning_example_v4.3' into 'release/v4.3'
examples: Add QR code support for provisioning examples [backport v4.3]

See merge request espressif/esp-idf!12555
2021-03-03 06:24:56 +00:00
fb82bdb9da docs: update api-reference chapters for C3
Checked and updated the following chapters:
 * api-reference/network
 * api-reference/protocols
 * api-reference/provisioning
 * api-reference/storage
 * api-reference/peripherals/ds
 * api-reference/peripherals/hmac
 * api-reference/peripherals/secure_element
2021-03-02 15:00:56 +08:00
b639514793 touch_element: add touch element api-reference doc 2021-03-01 18:02:53 +08:00
f12b571f82 Merge branch 'bugfix/esp_wifi_deinit_v4.3' into 'release/v4.3'
esp_wifi_deinit: Return ESP_ERR_WIFI_NOT_STOPPED if wifi is not stopped (v4.3)

See merge request espressif/esp-idf!12539
2021-03-01 09:31:13 +00:00
c0f06115d4 touch_element: add touch element lib example 2021-03-01 17:28:04 +08:00
20b25a9667 esp_wifi_deinit: Return ESP_ERR_WIFI_NOT_STOPPED if wifi is not stopped
Add test case to test this workflow
2021-03-01 05:33:26 +00:00
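
A small sketch of the call order implied by the change above; Wi-Fi init and start are assumed to have run already.

/* Sketch: esp_wifi_deinit() now returns ESP_ERR_WIFI_NOT_STOPPED instead of
 * tearing down a still-running driver; stop first, then deinit. */
#include "esp_err.h"
#include "esp_wifi.h"

static void wifi_teardown(void)
{
    esp_err_t err = esp_wifi_deinit();
    if (err == ESP_ERR_WIFI_NOT_STOPPED) {
        ESP_ERROR_CHECK(esp_wifi_stop());      /* stop the driver first */
        ESP_ERROR_CHECK(esp_wifi_deinit());    /* then deinitialize it */
    } else {
        ESP_ERROR_CHECK(err);
    }
}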
1d9d444c07 Merge branch 'bugfix/deepsleep_disable_brownout_s2_v4.3' into 'release/v4.3'
deep_sleep: on S2 disable the brown out detector before deep sleeping (v4.3)

See merge request espressif/esp-idf!12499
2021-03-01 03:50:20 +00:00
060a829091 examples: Add QR code support for provisioning examples 2021-03-01 10:51:36 +08:00
78314df2a5 Merge branch 'feature/enable_c3_ut_v4.3' into 'release/v4.3'
CI: run C3 unit test on protected branches

See merge request espressif/esp-idf!12416
2021-03-01 02:10:51 +00:00
8ceb462993 Merge branch 'feature/enable_gpio19_esp32c3_v4.3' into 'release/v4.3'
gpio: enable GPIO19 on ESP32C3 boards (backport v4.3)

See merge request espressif/esp-idf!12542
2021-02-27 05:25:42 +00:00
6accffecea Merge branch 'bugfix/fix_spi_slv_hd_dma_reset_issue_4.3' into 'release/v4.3'
spi_slave_hd: fix spi slv hd dma reset issue (4.3)

See merge request espressif/esp-idf!12513
2021-02-27 04:12:42 +00:00
de79e482c9 Merge branch 'feaature/deep_sleep_wakeup_backport' into 'release/v4.3'
esp_system: support gpio wakeup from deep sleep on esp32c3(backport v4.3)

See merge request espressif/esp-idf!12537
2021-02-26 14:49:19 +00:00
d92b647199 Merge branch 'bugfix/fix_some_wifi_bugs_0226_v4.3' into 'release/v4.3'
esp_wifi: fix some wifi bugs (backport v4.3)

See merge request espressif/esp-idf!12538
2021-02-26 12:52:39 +00:00
965daf977a Merge branch 'bugfix/btdm_fix_spp_acceptor_cancle_pair_crash_4.3' into 'release/v4.3'
fix crash caused by spp pairing cancel (4.3)

See merge request espressif/esp-idf!12544
2021-02-26 12:42:21 +00:00
zwj
aa652adc12 fix crash caused by spp pairing cancel 2021-02-26 19:17:51 +08:00
5795075460 gpio: enable GPIO19 on ESP32C3 boards 2021-02-26 17:54:36 +08:00
198d350fe5 esp_system: support gpio wakeup from deep sleep on esp32c3 2021-02-26 17:08:22 +08:00
24f3341a2d Merge branch 'docs/spi_flash_auto_suspend_v4.3' into 'release/v4.3'
spi_flash: update docs after adding CONFIG_SPI_FLASH_AUTO_SUSPEND (v4.3)

See merge request espressif/esp-idf!12512
2021-02-26 08:53:48 +00:00
5a429f644f esp_wifi: update wifi lib 2021-02-26 16:36:16 +08:00
9aae8e0ce3 esp_wifi: synchronize Wi-Fi adapter between different chips
Support preferring to allocate Wi-Fi memory from PSRAM on ESP32-S3

Support Wi-Fi TX cache buffer on ESP32-S3
2021-02-26 16:34:23 +08:00
e5e47ebae6 esp_wifi: store PHY digital registers before disabling PHY and load
them after enabling PHY
2021-02-26 16:34:10 +08:00
436c3c289e esp_wifi: optimize Wi-Fi rates
1. Support disabling 11b rates
2. Support configuring the ESP-NOW rate
3. Fix an STA PHY mode negotiation issue
4. Update FTM rates
2021-02-26 16:32:09 +08:00
2aa6aa8b88 change rom function for esp32c3 to fix eb lldesc size issue 2021-02-26 16:31:12 +08:00
3e9cd49d32 spi_slv_hd: add hal_trans_finish comments for clarifying risk 2021-02-26 10:39:12 +08:00
2c1845995b spi_slave_hd: refactor the hal append api to remove the spinlock 2021-02-26 10:39:10 +08:00
0c77299c34 Merge branch 'feature/update_efuse_doc_v4.3' into 'release/v4.3'
doc/efuse: Adds efuse doc for ESP32-C3 (v4.3)

See merge request espressif/esp-idf!12516
2021-02-25 23:22:51 +00:00
9ac3e7c5d1 Merge branch 'feature/doc_update_security_chapters_v4.3' into 'release/v4.3'
doc/secure features: Updates doc esp32c3 (v4.3)

See merge request espressif/esp-idf!12517
2021-02-25 23:14:12 +00:00
c42ce05941 Merge branch 'feature/apply_gdma_new_channel_api_to_adc_4.3' into 'release/v4.3'
adc: apply gdma new channel api to adc (v4.3)

See merge request espressif/esp-idf!12502
2021-02-25 17:37:17 +00:00
ea2eb9d833 doc(esp32c3): Updates secure features doc 2021-02-25 21:08:55 +08:00
dec52a93d4 doc/efuse: Adds efuse doc for ESP32-C3 2021-02-25 21:07:27 +08:00
41bee7831f adc: apply gdma api to adc on esp32c3 2021-02-25 18:53:32 +08:00
bd1b4dbda1 Merge branch 'feature/apply_gdma_new_channel_api_to_spi_4.3' into 'release/v4.3'
spi: apply gdma new channel api to spi (v4.3)

See merge request espressif/esp-idf!12501
2021-02-25 10:36:09 +00:00
1de12526eb spi_flash: update docs after adding CONFIG_SPI_FLASH_AUTO_SUSPEND 2021-02-25 18:08:23 +08:00
f0f2799946 Merge branch 'bugfix/gdma_pair_uninstall_concurrency_issue_v4.3' into 'release/v4.3'
gdma: fix wrong level of {group,pair} reference count (v4.3)

See merge request espressif/esp-idf!12488
2021-02-25 09:11:58 +00:00
5aabdd8abf Merge branch 'bugfix/eclipse_make_decode_v4.3' into 'release/v4.3'
tools: Fix Eclipse build: “UnicodeDecodeError: 'ascii' codec can't decode byte” (v4.3)

See merge request espressif/esp-idf!12357
2021-02-25 07:17:28 +00:00
8a2f91b48a spi: add enum for spi dma channels 2021-02-25 11:03:18 +08:00
ed6fb33726 spi: remove hard-coded DMA chan in soc_caps.h 2021-02-25 11:03:07 +08:00
66d10f0eec spi: refactor spi_common dma allocator 2021-02-25 11:01:33 +08:00
97f248d22c spi: update unit tests to spi gdma allocator 2021-02-25 11:01:27 +08:00
ffc4ff5a8c spi: apply gdma allocator to SPI 2021-02-25 11:01:16 +08:00
326d76ebdf spi: add dma channel auto-alloc feature on esp32 2021-02-25 11:01:06 +08:00
c6ed522d60 deep_sleep: on S2 disable the brown out detector before deep sleeping
On S2 the brown out detector would occasionally trigger erroneously during deep sleep.

Disable it before sleeping to circumvent this issue.

Closes https://github.com/espressif/esp-idf/issues/6179
2021-02-25 10:53:06 +08:00
8e187e7157 Merge branch 'bugfix/c3_unit_test_cleanup_v4.3' into 'release/v4.3'
System: C3 shared stack watchpoint & unit test cleanups (v4.3)

See merge request espressif/esp-idf!12418
2021-02-24 22:51:11 +00:00
19f18aaa11 gdma: fix wrong level of {group,pair} ref count 2021-02-24 17:46:23 +08:00
626a861115 async_mcp: clean eof flag when prepare rx descriptors 2021-02-24 17:46:23 +08:00
a0fdf4b06c Merge branch 'feature/riscv_get_tickrate_v4.3' into 'release/v4.3'
freertos: add API for getting tick rate on C3 (v4.3)

See merge request espressif/esp-idf!12471
2021-02-24 08:29:59 +00:00
3d9523724d freertos: add API for getting tick rate on C3 2021-02-23 12:14:11 +08:00
813b6c4ef6 Merge branch 'bugfix/fix_enable_reset_provision_cause_device_crash_v4.3' into 'release/v4.3'
provisioning: Fix device crash when CONFIG_EXAMPLE_RESET_PROVISIONED is enabled [backport v4.3]

See merge request espressif/esp-idf!12353
2021-02-22 06:13:17 +00:00
1b849d59de Merge branch 'bugfix/freemodbus_fix_parity_propagation_issue_v43' into 'release/v4.3'
freemodbus: fix mb controller parity propagation issues (Backport v4.3)

See merge request espressif/esp-idf!12390
2021-02-22 05:23:33 +00:00
87576aba28 provisioning: Fix device crash when CONFIG_EXAMPLE_RESET_PROVISIONED is enabled 2021-02-22 04:33:59 +00:00
57fb076590 Merge branch 'feature/prefer_python3_in_installer_v4.3' into 'release/v4.3'
tools: Prefer python3 during install and export (v4.3)

See merge request espressif/esp-idf!12376
2021-02-19 15:52:32 +00:00
360e7c4d51 system: enable shared stack watchpoint
Enable shared stack watchpoint for overflow detection

Enable unit tests:
 * "test printf using shared buffer stack" for C3
 * "Test vTaskDelayUntil" for S2
 * "UART can do poll()" for C3
2021-02-19 16:59:29 +08:00
9083ef97e5 spi_flash: disable mmap into instr space unit test for C3
On C3 the cache is programmatically split between Icache and Dcache, and with the default setup not many pages are left
available for additional mmaps into instruction space. Disabling this test for now, since any hypothetical use case for this
is no longer supported "out of the box".
2021-02-19 16:58:47 +08:00
a7a1e4dfba CI: run C3 unit test on protected branches 2021-02-19 16:47:00 +08:00
8ad92a92b9 Merge branch 'feature/ulp_c3_not_exists_update_doc_v4.3' into 'release/v4.3'
doc/ulp(esp32c3): Excludes ulp and some RTC features from ESP32C3 doc (v4.3)

See merge request espressif/esp-idf!12386
2021-02-17 03:54:01 +00:00
321ee21c4c freemodbus: fix mb controller parity propagation issues
Closes https://github.com/espressif/esp-idf/issues/6377
2021-02-16 10:47:43 +01:00
33820657c5 doc/ulp(esp32c3): Excludes ulp and some RTC features from ESP32C3 doc 2021-02-16 15:37:26 +08:00
33892aadb9 tools: Prefer python3 during install and export
The install and export scripts should work on systems without a "python"
executable.

Closes https://github.com/espressif/esp-idf/pull/6471

Closes https://github.com/espressif/esp-idf/issues/6532

Related to https://github.com/espressif/esp-idf/issues/6421 and
https://github.com/espressif/arduino-esp32/issues/4717
2021-02-14 18:49:21 +01:00
ad669801ae Fix eclipse build: “UnicodeDecodeError: 'ascii' codec can't decode byte”
Closes https://github.com/espressif/esp-idf/pull/6505
2021-02-10 12:48:44 +01:00
269 changed files with 8307 additions and 2681 deletions

View File

@ -274,6 +274,10 @@ menu "Bootloader config"
config BOOTLOADER_SKIP_VALIDATE_IN_DEEP_SLEEP
bool "Skip image validation when exiting deep sleep"
# note: dependencies for this config item are different to other "skip image validation"
# options, allowing to turn on "allow insecure options" and have secure boot with
# "skip validation when existing deep sleep". Keeping this to avoid a breaking change,
# but - as noted in help - it invalidates the integrity of Secure Boot checks
depends on (SECURE_BOOT && SECURE_BOOT_INSECURE) || !SECURE_BOOT
default n
help
@ -286,6 +290,48 @@ menu "Bootloader config"
partition as this would skip the validation upon first load of the new
OTA partition.
It is possible to enable this option with Secure Boot if "allow insecure
options" is enabled, however it's strongly recommended to NOT enable it as
it may allow a Secure Boot bypass.
config BOOTLOADER_SKIP_VALIDATE_ON_POWER_ON
bool "Skip image validation from power on reset (READ HELP FIRST)"
# only available if both Secure Boot and Check Signature on Boot are disabled
depends on !SECURE_SIGNED_ON_BOOT
default n
help
Some applications need to boot very quickly from power on. By default, the entire app binary
is read from flash and verified which takes up a significant portion of the boot time.
Enabling this option will skip validation of the app when the SoC boots from power on.
Note that in this case it's not possible for the bootloader to detect if an app image is
corrupted in the flash, therefore it's not possible to safely fall back to a different app
partition. Flash corruption of this kind is unlikely but can happen if there is a serious
firmware bug or physical damage.
Following other reset types, the bootloader will still validate the app image. This increases
the chances that flash corruption resulting in a crash can be detected following soft reset, and
the bootloader will fall back to a valid app image. To increase the chances of successfully recovering
from a flash corruption event, keep the option BOOTLOADER_WDT_ENABLE enabled and consider also enabling
BOOTLOADER_WDT_DISABLE_IN_USER_CODE - then manually disable the RTC Watchdog once the app is running.
In addition, enable both the Task and Interrupt watchdog timers with reset options set.
config BOOTLOADER_SKIP_VALIDATE_ALWAYS
bool "Skip image validation always (READ HELP FIRST)"
# only available if both Secure Boot and Check Signature on Boot are disabled
depends on !SECURE_SIGNED_ON_BOOT
default n
select BOOTLOADER_SKIP_VALIDATE_IN_DEEP_SLEEP
select BOOTLOADER_SKIP_VALIDATE_ON_POWER_ON
help
Selecting this option prevents the bootloader from ever validating the app image before
booting it. Any flash corruption of the selected app partition will make the entire SoC
unbootable.
Although flash corruption is a very rare case, it is not recommended to select this option.
Consider selecting "Skip image validation from power on reset" instead. However, if boot time
is the only important factor then it can be enabled.
config BOOTLOADER_RESERVE_RTC_SIZE
hex
default 0x10 if BOOTLOADER_SKIP_VALIDATE_IN_DEEP_SLEEP || BOOTLOADER_CUSTOM_RESERVE_RTC
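
For reference, a minimal sdkconfig.defaults fragment that exercises the new power-on option above while following the help text's recovery advice; the combination shown is illustrative and assumes Secure Boot and signature checking on boot are disabled.

# Illustrative sdkconfig.defaults fragment for fast power-on boot
CONFIG_BOOTLOADER_SKIP_VALIDATE_ON_POWER_ON=y
# Keep the bootloader watchdog so a corrupted or crashing app can still be caught
CONFIG_BOOTLOADER_WDT_ENABLE=y
CONFIG_BOOTLOADER_WDT_DISABLE_IN_USER_CODE=y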

View File

@ -327,11 +327,24 @@ err:
esp_err_t bootloader_load_image(const esp_partition_pos_t *part, esp_image_metadata_t *data)
{
#ifdef BOOTLOADER_BUILD
return image_load(ESP_IMAGE_LOAD, part, data);
#else
#if !defined(BOOTLOADER_BUILD)
return ESP_FAIL;
#endif
#else
esp_image_load_mode_t mode = ESP_IMAGE_LOAD;
#if !defined(CONFIG_SECURE_BOOT)
/* Skip validation under particular configurations */
#if CONFIG_BOOTLOADER_SKIP_VALIDATE_ALWAYS
mode = ESP_IMAGE_LOAD_NO_VALIDATE;
#elif CONFIG_BOOTLOADER_SKIP_VALIDATE_ON_POWER_ON
if (rtc_get_reset_reason(0) == POWERON_RESET) {
mode = ESP_IMAGE_LOAD_NO_VALIDATE;
}
#endif // CONFIG_BOOTLOADER_SKIP_...
#endif // CONFIG_SECURE_BOOT
return image_load(mode, part, data);
#endif // BOOTLOADER_BUILD
}
esp_err_t bootloader_load_image_no_verify(const esp_partition_pos_t *part, esp_image_metadata_t *data)

View File

@ -1442,16 +1442,12 @@ esp_err_t esp_bt_controller_deinit(void)
static void bt_shutdown(void)
{
esp_err_t ret = ESP_OK;
ESP_LOGD(BTDM_LOG_TAG, "stop/deinit bt");
ESP_LOGD(BTDM_LOG_TAG, "stop Bluetooth");
ret = esp_bt_controller_disable();
if (ESP_OK != ret) {
ESP_LOGW(BTDM_LOG_TAG, "controller disable ret=%d", ret);
}
ret = esp_bt_controller_deinit();
if (ESP_OK != ret) {
ESP_LOGW(BTDM_LOG_TAG, "controller deinit ret=%d", ret);
}
return;
}

View File

@ -740,13 +740,16 @@ static void IRAM_ATTR btdm_sleep_exit_phase0(void *param)
}
#endif
btdm_wakeup_request();
int event = (int) param;
if (event == BTDM_ASYNC_WAKEUP_SRC_VHCI || event == BTDM_ASYNC_WAKEUP_SRC_DISA) {
btdm_wakeup_request();
}
if (s_lp_cntl.wakeup_timer_required && s_lp_stat.wakeup_timer_started) {
esp_timer_stop(s_btdm_slp_tmr);
s_lp_stat.wakeup_timer_started = 0;
}
int event = (int) param;
if (event == BTDM_ASYNC_WAKEUP_SRC_VHCI || event == BTDM_ASYNC_WAKEUP_SRC_DISA) {
semphr_give_wrapper(s_wakeup_req_sem);
}

View File

@ -133,7 +133,8 @@ esp_err_t esp_spp_start_srv(esp_spp_sec_t sec_mask,
btc_spp_args_t arg;
ESP_BLUEDROID_STATUS_CHECK(ESP_BLUEDROID_STATUS_ENABLED);
if (strlen(name) > ESP_SPP_SERVER_NAME_MAX) {
if (name == NULL || strlen(name) > ESP_SPP_SERVER_NAME_MAX) {
LOG_ERROR("Invalid server name!\n");
return ESP_ERR_INVALID_ARG;
}
@ -157,13 +158,34 @@ esp_err_t esp_spp_start_srv(esp_spp_sec_t sec_mask,
esp_err_t esp_spp_stop_srv(void)
{
btc_msg_t msg;
btc_spp_args_t arg;
ESP_BLUEDROID_STATUS_CHECK(ESP_BLUEDROID_STATUS_ENABLED);
msg.sig = BTC_SIG_API_CALL;
msg.pid = BTC_PID_SPP;
msg.act = BTC_SPP_ACT_STOP_SRV;
arg.stop_srv.scn = BTC_SPP_INVALID_SCN;
return (btc_transfer_context(&msg, NULL, sizeof(btc_spp_args_t), NULL) == BT_STATUS_SUCCESS ? ESP_OK : ESP_FAIL);
return (btc_transfer_context(&msg, &arg, sizeof(btc_spp_args_t), NULL) == BT_STATUS_SUCCESS ? ESP_OK : ESP_FAIL);
}
esp_err_t esp_spp_stop_srv_scn(uint8_t scn)
{
btc_msg_t msg;
btc_spp_args_t arg;
ESP_BLUEDROID_STATUS_CHECK(ESP_BLUEDROID_STATUS_ENABLED);
if ((scn == 0) || (scn >= PORT_MAX_RFC_PORTS)) {
LOG_ERROR("Invalid SCN!\n");
return ESP_ERR_INVALID_ARG;
}
msg.sig = BTC_SIG_API_CALL;
msg.pid = BTC_PID_SPP;
msg.act = BTC_SPP_ACT_STOP_SRV;
arg.stop_srv.scn = scn;
return (btc_transfer_context(&msg, &arg, sizeof(btc_spp_args_t), NULL) == BT_STATUS_SUCCESS ? ESP_OK : ESP_FAIL);
}

View File

@ -26,11 +26,12 @@ typedef enum {
ESP_SPP_SUCCESS = 0, /*!< Successful operation. */
ESP_SPP_FAILURE, /*!< Generic failure. */
ESP_SPP_BUSY, /*!< Temporarily can not handle this request. */
ESP_SPP_NO_DATA, /*!< no data. */
ESP_SPP_NO_DATA, /*!< No data */
ESP_SPP_NO_RESOURCE, /*!< No more resource */
ESP_SPP_NEED_INIT, /*!< SPP module shall init first */
ESP_SPP_NEED_DEINIT, /*!< SPP module shall deinit first */
ESP_SPP_NO_CONNECTION, /*!< connection may have been closed */
ESP_SPP_NO_CONNECTION, /*!< Connection may have been closed */
ESP_SPP_NO_SERVER, /*!< No SPP server */
} esp_spp_status_t;
/* Security Setting Mask
@ -101,9 +102,10 @@ typedef union {
* @brief SPP_DISCOVERY_COMP_EVT
*/
struct spp_discovery_comp_evt_param {
esp_spp_status_t status; /*!< status */
uint8_t scn_num; /*!< The num of scn_num */
uint8_t scn[ESP_SPP_MAX_SCN]; /*!< channel # */
esp_spp_status_t status; /*!< status */
uint8_t scn_num; /*!< The num of scn_num */
uint8_t scn[ESP_SPP_MAX_SCN]; /*!< channel # */
const char *service_name[ESP_SPP_MAX_SCN]; /*!< service_name */
} disc_comp; /*!< SPP callback param of SPP_DISCOVERY_COMP_EVT */
/**
@ -143,6 +145,7 @@ typedef union {
esp_spp_status_t status; /*!< status */
uint32_t handle; /*!< The connection handle */
uint8_t sec_id; /*!< security ID used by this server */
uint8_t scn; /*!< Server channel number */
bool use_co; /*!< TRUE to use co_rfc_data */
} start; /*!< SPP callback param of ESP_SPP_START_EVT */
@ -151,6 +154,7 @@ typedef union {
*/
struct spp_srv_stop_evt_param {
esp_spp_status_t status; /*!< status */
uint8_t scn; /*!< Server channel number */
} srv_stop; /*!< SPP callback param of ESP_SPP_SRV_STOP_EVT */
/**
@ -304,7 +308,7 @@ esp_err_t esp_spp_disconnect(uint32_t handle);
esp_err_t esp_spp_start_srv(esp_spp_sec_t sec_mask, esp_spp_role_t role, uint8_t local_scn, const char *name);
/**
* @brief This function stops a SPP server.
* @brief This function stops all SPP servers.
* The operation will close all active SPP connection first, then the callback function will be called
* with ESP_SPP_CLOSE_EVT, and the number of ESP_SPP_CLOSE_EVT is equal to the number of connection.
* When the operation is completed, the callback is called with ESP_SPP_SRV_STOP_EVT.
@ -314,8 +318,24 @@ esp_err_t esp_spp_start_srv(esp_spp_sec_t sec_mask, esp_spp_role_t role, uint8_t
* - ESP_OK: success
* - other: failed
*/
esp_err_t esp_spp_stop_srv(void);
/**
* @brief This function stops a specific SPP server.
* The operation will first close all active SPP connections on the specified SPP server, then the callback function will be called
* with ESP_SPP_CLOSE_EVT, and the number of ESP_SPP_CLOSE_EVT events is equal to the number of connections.
* When the operation is completed, the callback is called with ESP_SPP_SRV_STOP_EVT.
* This function must be called after esp_spp_init() succeeds and before esp_spp_deinit().
*
* @param[in] scn: Server channel number.
*
* @return
* - ESP_OK: success
* - other: failed
*/
esp_err_t esp_spp_stop_srv_scn(uint8_t scn);
/**
* @brief This function is used to write data, only for ESP_SPP_MODE_CB.
* When this function need to be called repeatedly, it is strongly recommended to call this function again after

View File

@ -172,9 +172,10 @@ typedef struct {
/* data associated with BTA_JV_DISCOVERY_COMP_EVT_ */
typedef struct {
tBTA_JV_STATUS status; /* Whether the operation succeeded or failed. */
UINT8 scn_num; /* num of channel */
UINT8 scn[BTA_JV_MAX_SCN]; /* channel # */
tBTA_JV_STATUS status; /* Whether the operation succeeded or failed. */
UINT8 scn_num; /* num of channel */
UINT8 scn[BTA_JV_MAX_SCN]; /* channel # */
const char *service_name[BTA_JV_MAX_SCN]; /* service_name */
} tBTA_JV_DISCOVERY_COMP;
/* data associated with BTA_JV_CREATE_RECORD_EVT */
@ -305,6 +306,7 @@ typedef struct {
tBTA_JV_STATUS status; /* Whether the operation succeeded or failed. */
UINT32 handle; /* The connection handle */
UINT8 sec_id; /* security ID used by this server */
UINT8 scn; /* Server channel number */
BOOLEAN use_co; /* TRUE to use co_rfc_data */
} tBTA_JV_RFCOMM_START;
@ -376,8 +378,9 @@ typedef struct {
/* data associated with BTA_JV_FREE_SCN_EVT */
typedef struct {
tBTA_JV_STATUS status; /* Status of the operation */
tBTA_JV_SERVER_STATUS server_status;
tBTA_JV_STATUS status; /* Status of the operation */
tBTA_JV_SERVER_STATUS server_status; /* Server status */
UINT8 scn; /* Server channel number */
} tBTA_JV_FREE_SCN;

View File

@ -856,6 +856,7 @@ void bta_jv_free_scn(tBTA_JV_MSG *p_data)
tBTA_JV_FREE_SCN evt_data = {
.status = BTA_JV_SUCCESS,
.server_status = BTA_JV_SERVER_STATUS_MAX,
.scn = scn
};
tBTA_JV_FREE_SCN_USER_DATA *user_data = NULL;
@ -949,6 +950,7 @@ static void bta_jv_start_discovery_cback(UINT16 result, void *user_data)
status = BTA_JV_FAILURE;
if (result == SDP_SUCCESS || result == SDP_DB_FULL) {
tSDP_DISC_REC *p_sdp_rec = NULL;
tSDP_DISC_ATTR *p_attr = NULL;
tSDP_PROTOCOL_ELEM pe;
logu("bta_jv_cb.uuid", bta_jv_cb.uuid.uu.uuid128);
tBT_UUID su = shorten_sdp_uuid(&bta_jv_cb.uuid);
@ -957,7 +959,13 @@ static void bta_jv_start_discovery_cback(UINT16 result, void *user_data)
p_sdp_rec = SDP_FindServiceUUIDInDb(p_bta_jv_cfg->p_sdp_db, &su, p_sdp_rec);
APPL_TRACE_DEBUG("p_sdp_rec:%p", p_sdp_rec);
if (p_sdp_rec && SDP_FindProtocolListElemInRec(p_sdp_rec, UUID_PROTOCOL_RFCOMM, &pe)){
dcomp.scn[dcomp.scn_num++] = (UINT8) pe.params[0];
dcomp.scn[dcomp.scn_num] = (UINT8) pe.params[0];
if ((p_attr = SDP_FindAttributeInRec(p_sdp_rec, ATTR_ID_SERVICE_NAME)) != NULL) {
dcomp.service_name[dcomp.scn_num] = (char *)p_attr->attr_value.v.array;
} else {
dcomp.service_name[dcomp.scn_num] = NULL;
}
dcomp.scn_num++;
status = BTA_JV_SUCCESS;
}
} while (p_sdp_rec);
@ -2154,6 +2162,7 @@ void bta_jv_rfcomm_start_server(tBTA_JV_MSG *p_data)
evt_data.status = BTA_JV_SUCCESS;
evt_data.handle = p_pcb->handle;
evt_data.sec_id = sec_id;
evt_data.scn = rs->local_scn;
evt_data.use_co = TRUE;
PORT_ClearKeepHandleFlag(handle);

View File

@ -28,6 +28,8 @@
#define ESP_SPP_RINGBUF_SIZE 1000
#define BTC_SPP_INVALID_SCN 0x00
typedef enum {
BTC_SPP_ACT_INIT = 0,
BTC_SPP_ACT_UNINIT,
@ -74,6 +76,10 @@ typedef union {
UINT8 max_session;
char name[ESP_SPP_SERVER_NAME_MAX + 1];
} start_srv;
//BTC_SPP_ACT_STOP_SRV
struct stop_srv_arg {
UINT8 scn;
} stop_srv;
//BTC_SPP_ACT_WRITE
struct write_arg {
UINT32 handle;

View File

@ -265,6 +265,10 @@ static void btc_create_server_fail_cb(void)
{
esp_spp_cb_param_t param;
param.start.status = ESP_SPP_FAILURE;
param.start.handle = 0;
param.start.sec_id = 0;
param.start.scn = BTC_SPP_INVALID_SCN;
param.start.use_co = FALSE;
btc_spp_cb_to_app(ESP_SPP_START_EVT, &param);
}
@ -531,6 +535,7 @@ static void btc_spp_uninit(void)
esp_spp_cb_param_t param;
BTC_TRACE_ERROR("%s unable to malloc user data!", __func__);
param.srv_stop.status = ESP_SPP_NO_RESOURCE;
param.srv_stop.scn = spp_local_param.spp_slots[i]->scn;
btc_spp_cb_to_app(ESP_SPP_SRV_STOP_EVT, &param);
}
BTA_JvFreeChannel(spp_local_param.spp_slots[i]->scn, BTA_JV_CONN_TYPE_RFCOMM,
@ -672,61 +677,117 @@ static void btc_spp_start_srv(btc_spp_args_t *arg)
if (ret != ESP_SPP_SUCCESS) {
esp_spp_cb_param_t param;
param.start.status = ret;
param.start.handle = 0;
param.start.sec_id = 0;
param.start.scn = BTC_SPP_INVALID_SCN;
param.start.use_co = FALSE;
btc_spp_cb_to_app(ESP_SPP_START_EVT, &param);
}
}
static void btc_spp_stop_srv(void)
static void btc_spp_stop_srv(btc_spp_args_t *arg)
{
esp_spp_status_t ret = ESP_SPP_SUCCESS;
bool is_remove_all = false;
uint8_t i, j, srv_cnt = 0;
uint8_t *srv_scn_arr = osi_malloc(MAX_RFC_PORTS);
if (arg->stop_srv.scn == BTC_SPP_INVALID_SCN) {
is_remove_all = true;
}
do {
if (!is_spp_init()) {
BTC_TRACE_ERROR("%s SPP have not been init\n", __func__);
ret = ESP_SPP_NEED_INIT;
break;
}
if (srv_scn_arr == NULL) {
BTC_TRACE_ERROR("%s malloc srv_scn_arr failed\n", __func__);
ret = ESP_SPP_NO_RESOURCE;
break;
}
osi_mutex_lock(&spp_local_param.spp_slot_mutex, OSI_MUTEX_MAX_TIMEOUT);
// first, remove all connection
for (size_t i = 1; i <= MAX_RFC_PORTS; i++) {
if (spp_local_param.spp_slots[i] != NULL && spp_local_param.spp_slots[i]->connected) {
BTA_JvRfcommClose(spp_local_param.spp_slots[i]->rfc_handle, (tBTA_JV_RFCOMM_CBACK *)btc_spp_rfcomm_inter_cb,
(void *)spp_local_param.spp_slots[i]->id);
// [1] find all server
for (i = 1; i <= MAX_RFC_PORTS; i++) {
if (spp_local_param.spp_slots[i] != NULL && !spp_local_param.spp_slots[i]->connected &&
spp_local_param.spp_slots[i]->sdp_handle > 0) {
if (is_remove_all) {
srv_scn_arr[srv_cnt++] = spp_local_param.spp_slots[i]->scn;
} else if (spp_local_param.spp_slots[i]->scn == arg->stop_srv.scn) {
srv_scn_arr[srv_cnt++] = spp_local_param.spp_slots[i]->scn;
break;
}
}
}
// second, remove all server
for (size_t i = 1; i <= MAX_RFC_PORTS; i++) {
if (spp_local_param.spp_slots[i] != NULL && !spp_local_param.spp_slots[i]->connected) {
if (spp_local_param.spp_slots[i]->sdp_handle > 0) {
BTA_JvDeleteRecord(spp_local_param.spp_slots[i]->sdp_handle);
}
if (srv_cnt == 0) {
if (is_remove_all) {
BTC_TRACE_ERROR("%s can not find any server!\n", __func__);
} else {
BTC_TRACE_ERROR("%s can not find server:%d!\n", __func__, arg->stop_srv.scn);
}
ret = ESP_SPP_NO_SERVER;
break;
}
if (spp_local_param.spp_slots[i]->rfc_handle > 0) {
BTA_JvRfcommStopServer(spp_local_param.spp_slots[i]->rfc_handle,
(void *)spp_local_param.spp_slots[i]->id);
// [2] remove all local related connection
for (j = 0; j < srv_cnt; j++) {
for (i = 1; i <= MAX_RFC_PORTS; i++) {
if (spp_local_param.spp_slots[i] != NULL && spp_local_param.spp_slots[i]->connected &&
spp_local_param.spp_slots[i]->sdp_handle > 0 &&
spp_local_param.spp_slots[i]->scn == srv_scn_arr[j]) {
BTA_JvRfcommClose(spp_local_param.spp_slots[i]->rfc_handle,
(tBTA_JV_RFCOMM_CBACK *)btc_spp_rfcomm_inter_cb,
(void *)spp_local_param.spp_slots[i]->id);
}
}
}
tBTA_JV_FREE_SCN_USER_DATA *user_data = osi_malloc(sizeof(tBTA_JV_FREE_SCN_USER_DATA));
if (user_data) {
user_data->server_status = BTA_JV_SERVER_RUNNING;
user_data->slot_id = spp_local_param.spp_slots[i]->id;
} else {
esp_spp_cb_param_t param;
BTC_TRACE_ERROR("%s unable to malloc user data!", __func__);
param.srv_stop.status = ESP_SPP_NO_RESOURCE;
btc_spp_cb_to_app(ESP_SPP_SRV_STOP_EVT, &param);
// [3] remove all server
for (j = 0; j < srv_cnt; j++) {
for (i = 1; i <= MAX_RFC_PORTS; i++) {
if (spp_local_param.spp_slots[i] != NULL && !spp_local_param.spp_slots[i]->connected &&
spp_local_param.spp_slots[i]->sdp_handle > 0 &&
spp_local_param.spp_slots[i]->scn == srv_scn_arr[j]) {
if (spp_local_param.spp_slots[i]->sdp_handle > 0) {
BTA_JvDeleteRecord(spp_local_param.spp_slots[i]->sdp_handle);
}
if (spp_local_param.spp_slots[i]->rfc_handle > 0) {
BTA_JvRfcommStopServer(spp_local_param.spp_slots[i]->rfc_handle,
(void *)spp_local_param.spp_slots[i]->id);
}
tBTA_JV_FREE_SCN_USER_DATA *user_data = osi_malloc(sizeof(tBTA_JV_FREE_SCN_USER_DATA));
if (user_data) {
user_data->server_status = BTA_JV_SERVER_RUNNING;
user_data->slot_id = spp_local_param.spp_slots[i]->id;
} else {
esp_spp_cb_param_t param;
BTC_TRACE_ERROR("%s unable to malloc user data!", __func__);
param.srv_stop.status = ESP_SPP_NO_RESOURCE;
param.srv_stop.scn = spp_local_param.spp_slots[i]->scn;
btc_spp_cb_to_app(ESP_SPP_SRV_STOP_EVT, &param);
}
BTA_JvFreeChannel(spp_local_param.spp_slots[i]->scn, BTA_JV_CONN_TYPE_RFCOMM,
(tBTA_JV_RFCOMM_CBACK *)btc_spp_rfcomm_inter_cb, (void *)user_data);
}
BTA_JvFreeChannel(spp_local_param.spp_slots[i]->scn, BTA_JV_CONN_TYPE_RFCOMM,
(tBTA_JV_RFCOMM_CBACK *)btc_spp_rfcomm_inter_cb, (void *)user_data);
}
}
osi_mutex_unlock(&spp_local_param.spp_slot_mutex);
} while(0);
} while (0);
if (ret != ESP_SPP_SUCCESS) {
esp_spp_cb_param_t param;
param.srv_stop.status = ret;
param.srv_stop.scn = BTC_SPP_INVALID_SCN;
btc_spp_cb_to_app(ESP_SPP_SRV_STOP_EVT, &param);
}
if (srv_scn_arr) {
osi_free(srv_scn_arr);
srv_scn_arr = NULL;
}
}
static void btc_spp_write(btc_spp_args_t *arg)
@ -849,7 +910,7 @@ void btc_spp_call_handler(btc_msg_t *msg)
btc_spp_start_srv(arg);
break;
case BTC_SPP_ACT_STOP_SRV:
btc_spp_stop_srv();
btc_spp_stop_srv(arg);
break;
case BTC_SPP_ACT_WRITE:
btc_spp_write(arg);
@ -876,6 +937,8 @@ void btc_spp_cb_handler(btc_msg_t *msg)
param.disc_comp.status = p_data->disc_comp.status;
param.disc_comp.scn_num = p_data->disc_comp.scn_num;
memcpy(param.disc_comp.scn, p_data->disc_comp.scn, p_data->disc_comp.scn_num);
memcpy(param.disc_comp.service_name, p_data->disc_comp.service_name,
p_data->disc_comp.scn_num * sizeof(const char *));
btc_spp_cb_to_app(ESP_SPP_DISCOVERY_COMP_EVT, &param);
break;
case BTA_JV_RFCOMM_CL_INIT_EVT:
@ -909,6 +972,7 @@ void btc_spp_cb_handler(btc_msg_t *msg)
param.start.status = p_data->rfc_start.status;
param.start.handle = p_data->rfc_start.handle;
param.start.sec_id = p_data->rfc_start.sec_id;
param.start.scn = p_data->rfc_start.scn;
param.start.use_co = p_data->rfc_start.use_co;
btc_spp_cb_to_app(ESP_SPP_START_EVT, &param);
break;
@ -1130,6 +1194,7 @@ void btc_spp_cb_handler(btc_msg_t *msg)
case BTA_JV_FREE_SCN_EVT:
if (p_data->free_scn.server_status == BTA_JV_SERVER_RUNNING) {
param.srv_stop.status = p_data->free_scn.status;
param.srv_stop.scn = p_data->free_scn.scn;
btc_spp_cb_to_app(ESP_SPP_SRV_STOP_EVT, &param);
}
break;

View File

@ -414,12 +414,13 @@ void btm_ble_resolve_random_addr(BD_ADDR random_bda, tBTM_BLE_RESOLVE_CBACK *p_c
/* start to resolve random address */
/* check for next security record */
for (p_node = list_begin(btm_cb.p_sec_dev_rec_list); p_node; p_node = list_next(p_node)) {
p_dev_rec = list_node(p_node);
p_dev_rec = list_node(p_node);
p_mgnt_cb->p_dev_rec = p_dev_rec;
if (btm_ble_match_random_bda(p_dev_rec)) {
break;
}
}
break;
}
p_mgnt_cb->p_dev_rec = NULL;
}
btm_ble_resolve_address_cmpl();
} else {
(*p_cback)(NULL, p);

View File

@ -4589,26 +4589,6 @@ void btm_sec_disconnected (UINT16 handle, UINT8 reason)
#endif ///BT_USE_TRACES == TRUE && SMP_INCLUDED == TRUE
BTM_TRACE_EVENT("%s before update sec_flags=0x%x\n", __func__, p_dev_rec->sec_flags);
/* If we are in the process of bonding we need to tell client that auth failed */
if ( (btm_cb.pairing_state != BTM_PAIR_STATE_IDLE)
&& (memcmp (btm_cb.pairing_bda, p_dev_rec->bd_addr, BD_ADDR_LEN) == 0)) {
btm_sec_change_pairing_state (BTM_PAIR_STATE_IDLE);
p_dev_rec->sec_flags &= ~BTM_SEC_LINK_KEY_KNOWN;
if (btm_cb.api.p_auth_complete_callback) {
/* If the disconnection reason is REPEATED_ATTEMPTS,
send this error message to complete callback function
to display the error message of Repeated attempts.
All others, send HCI_ERR_AUTH_FAILURE. */
if (reason == HCI_ERR_REPEATED_ATTEMPTS) {
result = HCI_ERR_REPEATED_ATTEMPTS;
} else if (old_pairing_flags & BTM_PAIR_FLAGS_WE_STARTED_DD) {
result = HCI_ERR_HOST_REJECT_SECURITY;
}
(*btm_cb.api.p_auth_complete_callback) (p_dev_rec->bd_addr, p_dev_rec->dev_class,
p_dev_rec->sec_bd_name, result);
}
}
#if BLE_INCLUDED == TRUE && SMP_INCLUDED == TRUE
btm_ble_update_mode_operation(HCI_ROLE_UNKNOWN, p_dev_rec->bd_addr, HCI_SUCCESS);
/* see sec_flags processing in btm_acl_removed */
@ -4645,6 +4625,26 @@ void btm_sec_disconnected (UINT16 handle, UINT8 reason)
}
BTM_TRACE_EVENT("%s after update sec_flags=0x%x\n", __func__, p_dev_rec->sec_flags);
/* If we are in the process of bonding we need to tell client that auth failed */
if ( (btm_cb.pairing_state != BTM_PAIR_STATE_IDLE)
&& (memcmp (btm_cb.pairing_bda, p_dev_rec->bd_addr, BD_ADDR_LEN) == 0)) {
btm_sec_change_pairing_state (BTM_PAIR_STATE_IDLE);
p_dev_rec->sec_flags &= ~BTM_SEC_LINK_KEY_KNOWN;
if (btm_cb.api.p_auth_complete_callback) {
/* If the disconnection reason is REPEATED_ATTEMPTS,
send this error message to complete callback function
to display the error message of Repeated attempts.
All others, send HCI_ERR_AUTH_FAILURE. */
if (reason == HCI_ERR_REPEATED_ATTEMPTS) {
result = HCI_ERR_REPEATED_ATTEMPTS;
} else if (old_pairing_flags & BTM_PAIR_FLAGS_WE_STARTED_DD) {
result = HCI_ERR_HOST_REJECT_SECURITY;
}
(*btm_cb.api.p_auth_complete_callback) (p_dev_rec->bd_addr, p_dev_rec->dev_class,
p_dev_rec->sec_bd_name, result);
}
}
}
/*******************************************************************************

View File

@ -32,6 +32,7 @@
#include "hal/adc_hal.h"
#include "hal/dma_types.h"
#include "esp32c3/esp_efuse_rtc_calib.h"
#include "esp_private/gdma.h"
#define ADC_CHECK_RET(fun_ret) ({ \
if (fun_ret != ESP_OK) { \
@ -87,6 +88,7 @@ typedef struct adc_digi_context_t {
uint8_t *rx_dma_buf; //dma buffer
adc_dma_hal_context_t hal_dma; //dma context (hal)
adc_dma_hal_config_t hal_dma_config; //dma config (hal)
gdma_channel_handle_t rx_dma_channel; //dma rx channel handle
RingbufHandle_t ringbuf_hdl; //RX ringbuffer handler
bool ringbuf_overflow_flag; //1: ringbuffer overflow
bool driver_start_flag; //1: driver is started; 0: driver is stopped
@ -104,7 +106,7 @@ static uint32_t adc_get_calibration_offset(adc_ll_num_t adc_n, adc_channel_t cha
/*---------------------------------------------------------------
ADC Continuous Read Mode (via DMA)
---------------------------------------------------------------*/
static void adc_dma_intr(void* arg);
static IRAM_ATTR bool adc_dma_in_suc_eof_callback(gdma_channel_handle_t dma_chan, gdma_event_data_t *event_data, void *user_data);
static int8_t adc_digi_get_io_num(uint8_t adc_unit, uint8_t adc_channel)
{
@ -149,11 +151,6 @@ esp_err_t adc_digi_initialize(const adc_digi_init_config_t *init_config)
goto cleanup;
}
ret = esp_intr_alloc(SOC_GDMA_ADC_INTR_SOURCE, 0, adc_dma_intr, (void *)s_adc_digi_ctx, &s_adc_digi_ctx->dma_intr_hdl);
if (ret != ESP_OK) {
goto cleanup;
}
//ringbuffer
s_adc_digi_ctx->ringbuf_hdl = xRingbufferCreate(init_config->max_store_buf_size, RINGBUF_TYPE_BYTEBUF);
if (!s_adc_digi_ctx->ringbuf_hdl) {
@ -161,7 +158,7 @@ esp_err_t adc_digi_initialize(const adc_digi_init_config_t *init_config)
goto cleanup;
}
//malloc internal buffer
//malloc internal buffer used by DMA
s_adc_digi_ctx->bytes_between_intr = init_config->conv_num_each_intr;
s_adc_digi_ctx->rx_dma_buf = heap_caps_calloc(1, s_adc_digi_ctx->bytes_between_intr * INTERNAL_BUF_NUM, MALLOC_CAP_INTERNAL);
if (!s_adc_digi_ctx->rx_dma_buf) {
@ -176,7 +173,6 @@ esp_err_t adc_digi_initialize(const adc_digi_init_config_t *init_config)
goto cleanup;
}
s_adc_digi_ctx->hal_dma_config.desc_max_num = INTERNAL_BUF_NUM;
s_adc_digi_ctx->hal_dma_config.dma_chan = init_config->dma_chan;
//malloc pattern table
s_adc_digi_ctx->digi_controller_config.adc_pattern = calloc(1, SOC_ADC_PATT_LEN_MAX * sizeof(adc_digi_pattern_table_t));
@ -185,6 +181,7 @@ esp_err_t adc_digi_initialize(const adc_digi_init_config_t *init_config)
goto cleanup;
}
//init gpio pins
if (init_config->adc1_chan_mask) {
ret = adc_digi_gpio_init(ADC_NUM_1, init_config->adc1_chan_mask);
if (ret != ESP_OK) {
@ -198,8 +195,33 @@ esp_err_t adc_digi_initialize(const adc_digi_init_config_t *init_config)
}
}
//alloc rx gdma channel
gdma_channel_alloc_config_t rx_alloc_config = {
.direction = GDMA_CHANNEL_DIRECTION_RX,
};
ret = gdma_new_channel(&rx_alloc_config, &s_adc_digi_ctx->rx_dma_channel);
if (ret != ESP_OK) {
goto cleanup;
}
gdma_connect(s_adc_digi_ctx->rx_dma_channel, GDMA_MAKE_TRIGGER(GDMA_TRIG_PERIPH_ADC, 0));
gdma_strategy_config_t strategy_config = {
.auto_update_desc = true,
.owner_check = true
};
gdma_apply_strategy(s_adc_digi_ctx->rx_dma_channel, &strategy_config);
gdma_rx_event_callbacks_t cbs = {
.on_recv_eof = adc_dma_in_suc_eof_callback
};
gdma_register_rx_event_callbacks(s_adc_digi_ctx->rx_dma_channel, &cbs, s_adc_digi_ctx);
int dma_chan;
gdma_get_channel_id(s_adc_digi_ctx->rx_dma_channel, &dma_chan);
s_adc_digi_ctx->hal_dma_config.dma_chan = dma_chan;
//enable SARADC module clock
periph_module_enable(PERIPH_SARADC_MODULE);
periph_module_enable(PERIPH_GDMA_MODULE);
adc_hal_calibration_init(ADC_NUM_1);
adc_hal_calibration_init(ADC_NUM_2);
@ -212,45 +234,51 @@ cleanup:
}
static IRAM_ATTR void adc_dma_intr(void *arg)
static IRAM_ATTR bool adc_dma_intr(adc_digi_context_t *adc_digi_ctx);
static IRAM_ATTR bool adc_dma_in_suc_eof_callback(gdma_channel_handle_t dma_chan, gdma_event_data_t *event_data, void *user_data)
{
adc_digi_context_t *adc_digi_ctx = (adc_digi_context_t *)user_data;
return adc_dma_intr(adc_digi_ctx);
}
static IRAM_ATTR bool adc_dma_intr(adc_digi_context_t *adc_digi_ctx)
{
portBASE_TYPE taskAwoken = 0;
BaseType_t ret;
//clear the in suc eof interrupt
adc_hal_digi_clr_intr(&s_adc_digi_ctx->hal_dma, &s_adc_digi_ctx->hal_dma_config, IN_SUC_EOF_BIT);
while (s_adc_digi_ctx->hal_dma_config.cur_desc_ptr->dw0.owner == 0) {
dma_descriptor_t *current_desc = s_adc_digi_ctx->hal_dma_config.cur_desc_ptr;
ret = xRingbufferSendFromISR(s_adc_digi_ctx->ringbuf_hdl, current_desc->buffer, current_desc->dw0.length, &taskAwoken);
while (adc_digi_ctx->hal_dma_config.cur_desc_ptr->dw0.owner == 0) {
dma_descriptor_t *current_desc = adc_digi_ctx->hal_dma_config.cur_desc_ptr;
ret = xRingbufferSendFromISR(adc_digi_ctx->ringbuf_hdl, current_desc->buffer, current_desc->dw0.length, &taskAwoken);
if (ret == pdFALSE) {
//ringbuffer overflow
s_adc_digi_ctx->ringbuf_overflow_flag = 1;
adc_digi_ctx->ringbuf_overflow_flag = 1;
}
s_adc_digi_ctx->hal_dma_config.desc_cnt += 1;
adc_digi_ctx->hal_dma_config.desc_cnt += 1;
//cycle the dma descriptor and buffers
s_adc_digi_ctx->hal_dma_config.cur_desc_ptr = s_adc_digi_ctx->hal_dma_config.cur_desc_ptr->next;
if (!s_adc_digi_ctx->hal_dma_config.cur_desc_ptr) {
adc_digi_ctx->hal_dma_config.cur_desc_ptr = adc_digi_ctx->hal_dma_config.cur_desc_ptr->next;
if (!adc_digi_ctx->hal_dma_config.cur_desc_ptr) {
break;
}
}
if (!s_adc_digi_ctx->hal_dma_config.cur_desc_ptr) {
if (!adc_digi_ctx->hal_dma_config.cur_desc_ptr) {
assert(s_adc_digi_ctx->hal_dma_config.desc_cnt == s_adc_digi_ctx->hal_dma_config.desc_max_num);
assert(adc_digi_ctx->hal_dma_config.desc_cnt == adc_digi_ctx->hal_dma_config.desc_max_num);
//reset the current descriptor status
s_adc_digi_ctx->hal_dma_config.cur_desc_ptr = s_adc_digi_ctx->hal_dma_config.rx_desc;
s_adc_digi_ctx->hal_dma_config.desc_cnt = 0;
adc_digi_ctx->hal_dma_config.cur_desc_ptr = adc_digi_ctx->hal_dma_config.rx_desc;
adc_digi_ctx->hal_dma_config.desc_cnt = 0;
//start next turns of dma operation
adc_hal_digi_dma_multi_descriptor(&s_adc_digi_ctx->hal_dma_config, s_adc_digi_ctx->rx_dma_buf, s_adc_digi_ctx->bytes_between_intr, s_adc_digi_ctx->hal_dma_config.desc_max_num);
adc_hal_digi_rxdma_start(&s_adc_digi_ctx->hal_dma, &s_adc_digi_ctx->hal_dma_config);
adc_hal_digi_dma_multi_descriptor(&adc_digi_ctx->hal_dma_config, adc_digi_ctx->rx_dma_buf, adc_digi_ctx->bytes_between_intr, adc_digi_ctx->hal_dma_config.desc_max_num);
adc_hal_digi_rxdma_start(&adc_digi_ctx->hal_dma, &adc_digi_ctx->hal_dma_config);
}
if(taskAwoken == pdTRUE) {
portYIELD_FROM_ISR();
return true;
} else {
return false;
}
}
@ -387,11 +415,13 @@ esp_err_t adc_digi_deinitialize(void)
free(s_adc_digi_ctx->rx_dma_buf);
free(s_adc_digi_ctx->hal_dma_config.rx_desc);
free(s_adc_digi_ctx->digi_controller_config.adc_pattern);
gdma_disconnect(s_adc_digi_ctx->rx_dma_channel);
gdma_del_channel(s_adc_digi_ctx->rx_dma_channel);
free(s_adc_digi_ctx);
s_adc_digi_ctx = NULL;
periph_module_disable(PERIPH_SARADC_MODULE);
periph_module_disable(PERIPH_GDMA_MODULE);
return ESP_OK;
}

View File

@ -50,17 +50,30 @@ typedef struct gdma_channel_t gdma_channel_t;
typedef struct gdma_tx_channel_t gdma_tx_channel_t;
typedef struct gdma_rx_channel_t gdma_rx_channel_t;
/**
* The GDMA driver consists of three object classes, namely: Group, Pair and Channel.
* A Channel is allocated when the user calls `gdma_new_channel`; its lifecycle is maintained by the user.
* Pair and Group are lazily allocated; their lifecycles are maintained by this driver.
* We use reference counting to track their lifecycles, i.e. the driver will free their memory only when their reference count reaches 0.
*
* We don't use an all-in-one spinlock in this driver; instead, we create different spinlocks at different levels.
* The platform has a spinlock, which is used to protect the group handle slots and the reference count of each group.
* Each group has a spinlock, which is used to protect group-level state, e.g. the HAL object, the pair handle slots and the reference count of each pair.
* Each pair has a spinlock, which is used to protect pair-level state, e.g. the interrupt handle, the channel handle slots and the occupy code.
*/
struct gdma_platform_t {
portMUX_TYPE spinlock; // platform level spinlock
gdma_group_t *groups[SOC_GDMA_GROUPS]; // array of GDMA group instances
int group_ref_counts[SOC_GDMA_GROUPS]; // reference count used to protect group install/uninstall
};
struct gdma_group_t {
int group_id; // Group ID, index from 0
gdma_hal_context_t hal; // HAL instance is at group level
portMUX_TYPE spinlock; // group level spinlock
int ref_count; // reference count
gdma_pair_t *pairs[SOC_GDMA_PAIRS_PER_GROUP]; // handles of GDMA pairs
gdma_pair_t *pairs[SOC_GDMA_PAIRS_PER_GROUP]; // handles of GDMA pairs
int pair_ref_counts[SOC_GDMA_PAIRS_PER_GROUP]; // reference count used to protect pair install/uninstall
};
struct gdma_pair_t {
@ -71,7 +84,6 @@ struct gdma_pair_t {
int occupy_code; // each bit indicates which channel has been occupied (an occupied channel will be skipped during channel search)
intr_handle_t intr; // Interrupt is at pair level
portMUX_TYPE spinlock; // pair level spinlock
int ref_count; // reference count
};
struct gdma_channel_t {
@ -138,9 +150,9 @@ esp_err_t gdma_new_channel(const gdma_channel_alloc_config_t *config, gdma_chann
DMA_CHECK(config->sibling_chan->direction != config->direction,
"sibling channel should have a different direction", err, ESP_ERR_INVALID_ARG);
group = pair->group;
portENTER_CRITICAL(&pair->spinlock);
pair->ref_count++; // channel obtains a reference to pair
portEXIT_CRITICAL(&pair->spinlock);
portENTER_CRITICAL(&group->spinlock);
group->pair_ref_counts[pair->pair_id]++; // channel obtains a reference to pair
portEXIT_CRITICAL(&group->spinlock);
goto search_done; // skip the search path below if user has specify a sibling channel
}
@ -152,10 +164,14 @@ esp_err_t gdma_new_channel(const gdma_channel_alloc_config_t *config, gdma_chann
portENTER_CRITICAL(&pair->spinlock);
if (!(search_code & pair->occupy_code)) { // pair has suitable position for acquired channel(s)
pair->occupy_code |= search_code;
pair->ref_count++; // channel obtains a reference to pair
search_code = 0; // exit search loop
}
portEXIT_CRITICAL(&pair->spinlock);
if (!search_code) {
portENTER_CRITICAL(&group->spinlock);
group->pair_ref_counts[j]++; // channel obtains a reference to pair
portEXIT_CRITICAL(&group->spinlock);
}
}
gdma_release_pair_handle(pair);
} // loop used to search pair
@ -404,26 +420,21 @@ err:
return ret_code;
}
static inline bool gdma_is_group_busy(gdma_group_t *group)
{
return group->ref_count;
}
static void gdma_uninstall_group(gdma_group_t *group)
{
int group_id = group->group_id;
bool do_deinitialize = false;
if (s_platform.groups[group_id] && !gdma_is_group_busy(group)) {
portENTER_CRITICAL(&s_platform.spinlock);
if (s_platform.groups[group_id] && !gdma_is_group_busy(group)) {
do_deinitialize = true;
s_platform.groups[group_id] = NULL; // deregister from platform
gdma_ll_enable_clock(group->hal.dev, false);
periph_module_disable(gdma_periph_signals.groups[group_id].module);
}
portEXIT_CRITICAL(&s_platform.spinlock);
portENTER_CRITICAL(&s_platform.spinlock);
s_platform.group_ref_counts[group_id]--;
if (s_platform.group_ref_counts[group_id] == 0) {
assert(s_platform.groups[group_id]);
do_deinitialize = true;
s_platform.groups[group_id] = NULL; // deregister from platform
gdma_ll_enable_clock(group->hal.dev, false);
periph_module_disable(gdma_periph_signals.groups[group_id].module);
}
portEXIT_CRITICAL(&s_platform.spinlock);
if (do_deinitialize) {
free(group);
@ -453,7 +464,7 @@ static gdma_group_t *gdma_acquire_group_handle(int group_id)
group = s_platform.groups[group_id];
}
// someone acquired the group handle means we have a new object that refer to this group
group->ref_count++;
s_platform.group_ref_counts[group_id]++;
portEXIT_CRITICAL(&s_platform.spinlock);
if (new_group) {
@ -468,39 +479,31 @@ out:
static void gdma_release_group_handle(gdma_group_t *group)
{
if (group) {
portENTER_CRITICAL(&group->spinlock);
group->ref_count--;
portEXIT_CRITICAL(&group->spinlock);
gdma_uninstall_group(group);
}
}
static inline bool gdma_is_pair_busy(gdma_pair_t *pair)
{
return pair->ref_count;
}
static void gdma_uninstall_pair(gdma_pair_t *pair)
{
gdma_group_t *group = pair->group;
int pair_id = pair->pair_id;
bool do_deinitialize = false;
if (group->pairs[pair_id] && !gdma_is_pair_busy(pair)) {
portENTER_CRITICAL(&group->spinlock);
if (group->pairs[pair_id] && !gdma_is_pair_busy(pair)) {
do_deinitialize = true;
group->pairs[pair_id] = NULL; // deregister from pair
group->ref_count--; // decrease reference count, because this pair won't refer to the group
if (pair->intr) {
// disable interrupt handler (but not freed, esp_intr_free is a blocking API, we can't use it in a critical section)
esp_intr_disable(pair->intr);
gdma_ll_enable_interrupt(group->hal.dev, pair->pair_id, UINT32_MAX, false); // disable all interrupt events
gdma_ll_clear_interrupt_status(group->hal.dev, pair->pair_id, UINT32_MAX); // clear all pending events
}
portENTER_CRITICAL(&group->spinlock);
group->pair_ref_counts[pair_id]--;
if (group->pair_ref_counts[pair_id] == 0) {
assert(group->pairs[pair_id]);
do_deinitialize = true;
group->pairs[pair_id] = NULL; // deregister from pair
if (pair->intr) {
// disable interrupt handler (but not freed, esp_intr_free is a blocking API, we can't use it in a critical section)
esp_intr_disable(pair->intr);
gdma_ll_enable_interrupt(group->hal.dev, pair->pair_id, UINT32_MAX, false); // disable all interrupt events
gdma_ll_clear_interrupt_status(group->hal.dev, pair->pair_id, UINT32_MAX); // clear all pending events
}
portEXIT_CRITICAL(&group->spinlock);
}
portEXIT_CRITICAL(&group->spinlock);
if (do_deinitialize) {
if (pair->intr) {
esp_intr_free(pair->intr); // free interrupt resource
@ -508,6 +511,7 @@ static void gdma_uninstall_pair(gdma_pair_t *pair)
}
free(pair);
ESP_LOGD(TAG, "del pair (%d,%d)", group->group_id, pair_id);
gdma_uninstall_group(group);
}
}
@ -525,7 +529,6 @@ static gdma_pair_t *gdma_acquire_pair_handle(gdma_group_t *group, int pair_id)
new_pair = true;
pair = pre_alloc_pair;
group->pairs[pair_id] = pair; // register to group
group->ref_count++; // pair obtains a reference to group
pair->group = group;
pair->pair_id = pair_id;
pair->spinlock = (portMUX_TYPE)portMUX_INITIALIZER_UNLOCKED;
@ -533,10 +536,13 @@ static gdma_pair_t *gdma_acquire_pair_handle(gdma_group_t *group, int pair_id)
pair = group->pairs[pair_id];
}
// someone acquired the pair handle means we have a new object that refer to this pair
pair->ref_count++;
group->pair_ref_counts[pair_id]++;
portEXIT_CRITICAL(&group->spinlock);
if (new_pair) {
portENTER_CRITICAL(&s_platform.spinlock);
s_platform.group_ref_counts[group->group_id]++; // pair obtains a reference to group
portEXIT_CRITICAL(&s_platform.spinlock);
ESP_LOGD(TAG, "new pair (%d,%d) at %p", group->group_id, pair->pair_id, pair);
} else {
free(pre_alloc_pair);
@ -548,9 +554,6 @@ out:
static void gdma_release_pair_handle(gdma_pair_t *pair)
{
if (pair) {
portENTER_CRITICAL(&pair->spinlock);
pair->ref_count--;
portEXIT_CRITICAL(&pair->spinlock);
gdma_uninstall_pair(pair);
}
}
@ -558,16 +561,15 @@ static void gdma_release_pair_handle(gdma_pair_t *pair)
static esp_err_t gdma_del_tx_channel(gdma_channel_t *dma_channel)
{
gdma_pair_t *pair = dma_channel->pair;
gdma_group_t *group = pair->group;
gdma_tx_channel_t *tx_chan = __containerof(dma_channel, gdma_tx_channel_t, base);
portENTER_CRITICAL(&pair->spinlock);
pair->tx_chan = NULL;
pair->ref_count--; // decrease reference count, because this channel won't refer to the pair
pair->occupy_code &= ~SEARCH_REQUEST_TX_CHANNEL;
portEXIT_CRITICAL(&pair->spinlock);
ESP_LOGD(TAG, "del tx channel (%d,%d)", pair->group->group_id, pair->pair_id);
ESP_LOGD(TAG, "del tx channel (%d,%d)", group->group_id, pair->pair_id);
free(tx_chan);
gdma_uninstall_pair(pair);
return ESP_OK;
}
@ -575,16 +577,15 @@ static esp_err_t gdma_del_tx_channel(gdma_channel_t *dma_channel)
static esp_err_t gdma_del_rx_channel(gdma_channel_t *dma_channel)
{
gdma_pair_t *pair = dma_channel->pair;
gdma_group_t *group = pair->group;
gdma_rx_channel_t *rx_chan = __containerof(dma_channel, gdma_rx_channel_t, base);
portENTER_CRITICAL(&pair->spinlock);
pair->rx_chan = NULL;
pair->ref_count--; // decrease reference count, because this channel won't refer to the pair
pair->occupy_code &= ~SEARCH_REQUEST_RX_CHANNEL;
portEXIT_CRITICAL(&pair->spinlock);
ESP_LOGD(TAG, "del rx channel (%d,%d)", pair->group->group_id, pair->pair_id);
ESP_LOGD(TAG, "del rx channel (%d,%d)", group->group_id, pair->pair_id);
free(rx_chan);
gdma_uninstall_pair(pair);
return ESP_OK;
}

View File

@ -391,7 +391,8 @@ esp_err_t gpio_config(const gpio_config_t *pGPIOConfig)
gpio_intr_disable(io_num);
}
PIN_FUNC_SELECT(io_reg, PIN_FUNC_GPIO); /*function number 2 is GPIO_FUNC for each pin */
/* By default, all the pins have to be configured as GPIO pins. */
PIN_FUNC_SELECT(io_reg, PIN_FUNC_GPIO);
}
io_num++;
@ -554,9 +555,11 @@ esp_err_t gpio_wakeup_enable(gpio_num_t gpio_num, gpio_int_type_t intr_type)
esp_err_t ret = ESP_OK;
if ((intr_type == GPIO_INTR_LOW_LEVEL) || (intr_type == GPIO_INTR_HIGH_LEVEL)) {
#if SOC_RTCIO_WAKE_SUPPORTED
if (rtc_gpio_is_valid_gpio(gpio_num)) {
ret = rtc_gpio_wakeup_enable(gpio_num, intr_type);
}
#endif
portENTER_CRITICAL(&gpio_context.gpio_spinlock);
gpio_hal_wakeup_enable(gpio_context.gpio_hal, gpio_num, intr_type);
portEXIT_CRITICAL(&gpio_context.gpio_spinlock);
@ -572,10 +575,11 @@ esp_err_t gpio_wakeup_disable(gpio_num_t gpio_num)
{
GPIO_CHECK(GPIO_IS_VALID_GPIO(gpio_num), "GPIO number error", ESP_ERR_INVALID_ARG);
esp_err_t ret = ESP_OK;
#if SOC_RTCIO_WAKE_SUPPORTED
if (rtc_gpio_is_valid_gpio(gpio_num)) {
ret = rtc_gpio_wakeup_disable(gpio_num);
}
#endif
portENTER_CRITICAL(&gpio_context.gpio_spinlock);
gpio_hal_wakeup_disable(gpio_context.gpio_hal, gpio_num);
portEXIT_CRITICAL(&gpio_context.gpio_spinlock);
@ -630,7 +634,9 @@ esp_err_t gpio_hold_en(gpio_num_t gpio_num)
int ret = ESP_OK;
if (rtc_gpio_is_valid_gpio(gpio_num)) {
#if SOC_RTCIO_HOLD_SUPPORTED
ret = rtc_gpio_hold_en(gpio_num);
#endif
} else if (GPIO_HOLD_MASK[gpio_num]) {
portENTER_CRITICAL(&gpio_context.gpio_spinlock);
gpio_hal_hold_en(gpio_context.gpio_hal, gpio_num);
@ -648,7 +654,9 @@ esp_err_t gpio_hold_dis(gpio_num_t gpio_num)
int ret = ESP_OK;
if (rtc_gpio_is_valid_gpio(gpio_num)) {
#if SOC_RTCIO_HOLD_SUPPORTED
ret = rtc_gpio_hold_dis(gpio_num);
#endif
}else if (GPIO_HOLD_MASK[gpio_num]) {
portENTER_CRITICAL(&gpio_context.gpio_spinlock);
gpio_hal_hold_dis(gpio_context.gpio_hal, gpio_num);
@ -678,7 +686,9 @@ void gpio_deep_sleep_hold_dis(void)
esp_err_t gpio_force_hold_all()
{
#if SOC_RTCIO_HOLD_SUPPORTED
rtc_gpio_force_hold_all();
#endif
portENTER_CRITICAL(&gpio_context.gpio_spinlock);
gpio_hal_force_hold_all(gpio_context.gpio_hal);
portEXIT_CRITICAL(&gpio_context.gpio_spinlock);
@ -687,9 +697,11 @@ esp_err_t gpio_force_hold_all()
esp_err_t gpio_force_unhold_all()
{
#if SOC_RTCIO_HOLD_SUPPORTED
rtc_gpio_force_hold_dis_all();
#endif
portENTER_CRITICAL(&gpio_context.gpio_spinlock);
gpio_hal_force_unhold_all(gpio_context.gpio_hal);
gpio_hal_force_unhold_all();
portEXIT_CRITICAL(&gpio_context.gpio_spinlock);
return ESP_OK;
}
@ -876,5 +888,35 @@ esp_err_t gpio_sleep_pupd_config_unapply(gpio_num_t gpio_num)
gpio_hal_sleep_pupd_config_unapply(gpio_context.gpio_hal, gpio_num);
return ESP_OK;
}
#endif
#endif
#endif // CONFIG_GPIO_ESP32_SUPPORT_SWITCH_SLP_PULL
#endif // SOC_GPIO_SUPPORT_SLP_SWITCH
#if SOC_GPIO_SUPPORT_DEEPSLEEP_WAKEUP
esp_err_t gpio_deep_sleep_wakeup_enable(gpio_num_t gpio_num, gpio_int_type_t intr_type)
{
if (!gpio_hal_is_valid_deepsleep_wakeup_gpio(gpio_num)) {
ESP_LOGE(GPIO_TAG, "GPIO %d does not support deep sleep wakeup", gpio_num);
return ESP_ERR_INVALID_ARG;
}
if ((intr_type != GPIO_INTR_LOW_LEVEL) && (intr_type != GPIO_INTR_HIGH_LEVEL)) {
ESP_LOGE(GPIO_TAG, "GPIO wakeup only supports level mode, but edge mode set. gpio_num:%u", gpio_num);
return ESP_ERR_INVALID_ARG;
}
portENTER_CRITICAL(&gpio_context.gpio_spinlock);
gpio_hal_deepsleep_wakeup_enable(gpio_context.gpio_hal, gpio_num, intr_type);
portEXIT_CRITICAL(&gpio_context.gpio_spinlock);
return ESP_OK;
}
esp_err_t gpio_deep_sleep_wakeup_disable(gpio_num_t gpio_num)
{
if (!gpio_hal_is_valid_deepsleep_wakeup_gpio(gpio_num)) {
ESP_LOGE(GPIO_TAG, "GPIO %d does not support deep sleep wakeup", gpio_num);
return ESP_ERR_INVALID_ARG;
}
portENTER_CRITICAL(&gpio_context.gpio_spinlock);
gpio_hal_deepsleep_wakeup_disable(gpio_context.gpio_hal, gpio_num);
portEXIT_CRITICAL(&gpio_context.gpio_spinlock);
return ESP_OK;
}
#endif // SOC_GPIO_SUPPORT_DEEPSLEEP_WAKEUP

View File

@ -137,9 +137,8 @@ typedef enum {
typedef struct adc_digi_init_config_s {
uint32_t max_store_buf_size; ///< Max length of the converted data that the driver can store before it is processed. When this length is reached, the driver dumps out all the old data and starts to store again.
uint32_t conv_num_each_intr; ///< Bytes of data that can be converted in 1 interrupt.
uint32_t dma_chan; ///< DMA channel.
uint16_t adc1_chan_mask; ///< Channel list of ADC1 to be initialized.
uint16_t adc2_chan_mask; ///< Channel list of ADC2 to be initialized.
uint32_t adc1_chan_mask; ///< Channel list of ADC1 to be initialized.
uint32_t adc2_chan_mask; ///< Channel list of ADC2 to be initialized.
} adc_digi_init_config_t;
#endif
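For reference, a minimal configuration sketch under the new layout: the channel masks are now 32-bit and the DMA channel field is gone because the driver reserves a GDMA channel internally. All values below, and the adc_digi_initialize() call site, are illustrative assumptions rather than recommended settings.

#include "driver/adc.h"

// Sketch only: masks and buffer sizes are placeholder values.
static esp_err_t example_adc_digi_init(void)
{
    adc_digi_init_config_t adc_dma_cfg = {
        .max_store_buf_size = 1024,            // driver-side buffer for converted data
        .conv_num_each_intr = 256,             // bytes converted per interrupt
        .adc1_chan_mask = (1 << 2) | (1 << 3),
        .adc2_chan_mask = (1 << 0),
        // no .dma_chan field any more - the driver allocates its own GDMA channel
    };
    return adc_digi_initialize(&adc_dma_cfg);
}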

View File

@ -516,6 +516,38 @@ esp_err_t gpio_sleep_pupd_config_unapply(gpio_num_t gpio_num);
#endif
#endif
#if SOC_GPIO_SUPPORT_DEEPSLEEP_WAKEUP
#define GPIO_IS_DEEP_SLEEP_WAKEUP_VALID_GPIO(gpio_num) ((gpio_num & ~SOC_GPIO_DEEP_SLEEP_WAKEUP_VALID_GPIO_MASK) == 0)
/**
* @brief Enable GPIO deep-sleep wake-up function.
*
* @param gpio_num GPIO number.
*
* @param intr_type GPIO wake-up type. Only GPIO_INTR_LOW_LEVEL or GPIO_INTR_HIGH_LEVEL can be used.
*
* @note Called by the SDK. Users shouldn't call this directly from application code.
*
* @return
* - ESP_OK Success
* - ESP_ERR_INVALID_ARG Parameter error
*/
esp_err_t gpio_deep_sleep_wakeup_enable(gpio_num_t gpio_num, gpio_int_type_t intr_type);
/**
* @brief Disable GPIO deep-sleep wake-up function.
*
* @param gpio_num GPIO number
*
* @return
* - ESP_OK Success
* - ESP_ERR_INVALID_ARG Parameter error
*/
esp_err_t gpio_deep_sleep_wakeup_disable(gpio_num_t gpio_num);
#endif
#ifdef __cplusplus
}
#endif
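Only level triggers are accepted here; a minimal sketch of arming the wake-up source (GPIO_NUM_2 and the high-level trigger are assumed example values, and applications normally reach this through the sleep API rather than calling it directly):

#include "driver/gpio.h"

// Hypothetical call site, for illustration only.
static esp_err_t example_arm_gpio_deepsleep_wakeup(void)
{
    esp_err_t err = gpio_deep_sleep_wakeup_enable(GPIO_NUM_2, GPIO_INTR_HIGH_LEVEL);
    if (err != ESP_OK) {
        // Either the pad cannot wake the chip from deep sleep, or an edge trigger was requested.
        return err;
    }
    return ESP_OK;
}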

View File

@ -72,6 +72,24 @@ extern "C"
#define SPICOMMON_BUSFLAG_NATIVE_PINS SPICOMMON_BUSFLAG_IOMUX_PINS
/**
* @brief SPI DMA channels
*/
typedef enum {
SPI_DMA_DISABLED = 0, ///< Do not enable DMA for SPI
#if CONFIG_IDF_TARGET_ESP32
SPI_DMA_CH1 = 1, ///< Enable DMA, select DMA Channel 1
SPI_DMA_CH2 = 2, ///< Enable DMA, select DMA Channel 2
#endif
SPI_DMA_CH_AUTO = 3, ///< Enable DMA, channel is automatically selected by driver
} spi_common_dma_t;
#if __cplusplus
/* Needed for C++ backwards compatibility with earlier ESP-IDF where this argument is a bare 'int'. Can be removed in ESP-IDF 5 */
typedef int spi_dma_chan_t;
#else
typedef spi_common_dma_t spi_dma_chan_t;
#endif
/**
* @brief This is a configuration structure for a SPI bus.
@ -103,13 +121,12 @@ typedef struct {
*
* @warning For now, only supports HSPI and VSPI.
*
* @param host_id SPI peripheral that controls this bus
* @param bus_config Pointer to a spi_bus_config_t struct specifying how the host should be initialized
* @param dma_chan Either channel 1 or 2, or 0 in the case when no DMA is required. Selecting a DMA channel
* for a SPI bus allows transfers on the bus to have sizes only limited by the amount of
* internal memory. Selecting no DMA channel (by passing the value 0) limits the amount of
* bytes transfered to a maximum of 64. Set to 0 if only the SPI flash uses
* this bus.
* @param host_id SPI peripheral that controls this bus
* @param bus_config Pointer to a spi_bus_config_t struct specifying how the host should be initialized
* @param dma_chan - Selecting a DMA channel for an SPI bus allows transactions on the bus with sizes limited only by the amount of internal memory.
* - Selecting SPI_DMA_DISABLED limits the size of transactions.
* - Set to SPI_DMA_DISABLED if only the SPI flash uses this bus.
* - Set to SPI_DMA_CH_AUTO to let the driver allocate the DMA channel.
*
* @warning If a DMA channel is selected, any transmit and receive buffer used should be allocated in
* DMA-capable memory.
@ -121,10 +138,11 @@ typedef struct {
* @return
* - ESP_ERR_INVALID_ARG if configuration is invalid
* - ESP_ERR_INVALID_STATE if host already is in use
* - ESP_ERR_NOT_FOUND if there is no available DMA channel
* - ESP_ERR_NO_MEM if out of memory
* - ESP_OK on success
*/
esp_err_t spi_bus_initialize(spi_host_device_t host_id, const spi_bus_config_t *bus_config, int dma_chan);
esp_err_t spi_bus_initialize(spi_host_device_t host_id, const spi_bus_config_t *bus_config, spi_dma_chan_t dma_chan);
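A minimal sketch of a master-bus init with the enum-typed argument (the pin mapping and transfer size are placeholder assumptions, not a recommended layout):

#include "driver/spi_common.h"

static esp_err_t example_spi_master_bus_init(void)
{
    spi_bus_config_t buscfg = {
        .mosi_io_num = 11,
        .miso_io_num = 13,
        .sclk_io_num = 12,
        .quadwp_io_num = -1,
        .quadhd_io_num = -1,
        .max_transfer_sz = 4096,
    };
    // SPI_DMA_CH_AUTO lets the driver pick a free channel on every target;
    // SPI_DMA_DISABLED caps single transfers at the internal buffer size.
    return spi_bus_initialize(SPI2_HOST, &buscfg, SPI_DMA_CH_AUTO);
}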
/**
* @brief Free a SPI bus

View File

@ -65,7 +65,9 @@ typedef struct {
spi_bus_config_t bus_cfg; ///< Config used to initialize the bus
uint32_t flags; ///< Flags (attributes) of the bus
int max_transfer_sz; ///< Maximum length of bytes available to send
int dma_chan; ///< DMA channel used
bool dma_enabled; ///< To enable DMA or not
int tx_dma_chan; ///< TX DMA channel, on ESP32 and ESP32S2, tx_dma_chan and rx_dma_chan are the same
int rx_dma_chan; ///< RX DMA channel, on ESP32 and ESP32S2, tx_dma_chan and rx_dma_chan are the same
int dma_desc_num; ///< DMA descriptor number of dmadesc_tx or dmadesc_rx.
lldesc_t *dmadesc_tx; ///< DMA descriptor array for TX
lldesc_t *dmadesc_rx; ///< DMA descriptor array for RX
@ -116,47 +118,29 @@ bool spicommon_periph_in_use(spi_host_device_t host);
bool spicommon_periph_free(spi_host_device_t host);
/**
* @brief Try to claim a SPI DMA channel
* @brief Alloc DMA for SPI Slave
*
* Call this if your driver wants to use SPI with a DMA channel.
* @param host_id SPI host ID
* @param dma_chan DMA channel to be used
* @param[out] out_actual_tx_dma_chan Actual TX DMA channel (if you choose to assign a specific DMA channel, this will be the channel you assigned before)
* @param[out] out_actual_rx_dma_chan Actual RX DMA channel (if you choose to assign a specific DMA channel, this will be the channel you assigned before)
*
* @param dma_chan channel to claim
*
* @note This public API is deprecated.
*
* @return True if success; false otherwise.
* @return
* - ESP_OK: On success
* - ESP_ERR_NO_MEM: Not enough memory
* - ESP_ERR_NOT_FOUND: There is no available DMA channel
*/
bool spicommon_dma_chan_claim(int dma_chan);
esp_err_t spicommon_slave_dma_chan_alloc(spi_host_device_t host_id, spi_dma_chan_t dma_chan, uint32_t *out_actual_tx_dma_chan, uint32_t *out_actual_rx_dma_chan);
/**
* @brief Check whether the spi DMA channel is in use.
* @brief Free DMA for SPI Slave
*
* @param dma_chan DMA channel to check.
* @param host_id SPI host ID
*
* @note This public API is deprecated.
*
* @return True if in use, otherwise false.
* @return
* - ESP_OK: On success
*/
bool spicommon_dma_chan_in_use(int dma_chan);
/**
* @brief Return the SPI DMA channel so other driver can claim it, or just to power down DMA.
*
* @param dma_chan channel to return
*
* @note This public API is deprecated.
*
* @return True if success; false otherwise.
*/
bool spicommon_dma_chan_free(int dma_chan);
/**
* @brief Connect SPI and DMA peripherals
*
* @param host SPI peripheral
* @param dma_chan DMA channel
*/
void spicommon_connect_spi_and_dma(spi_host_device_t host, int dma_chan);
esp_err_t spicommon_slave_free_dma(spi_host_device_t host_id);
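These helpers are internal to the slave drivers; a rough sketch of the intended alloc/free pairing (the include path and the SPI2_HOST choice are assumptions made for illustration):

#include "driver/spi_common_internal.h"   // assumed include path for these internal declarations

static esp_err_t example_slave_dma_setup(void)
{
    uint32_t tx_dma_chan = 0;
    uint32_t rx_dma_chan = 0;
    esp_err_t err = spicommon_slave_dma_chan_alloc(SPI2_HOST, SPI_DMA_CH_AUTO,
                                                   &tx_dma_chan, &rx_dma_chan);
    if (err != ESP_OK) {
        return err;   // e.g. ESP_ERR_NOT_FOUND when no free DMA channel is left
    }
    // ... set up descriptors / HAL with tx_dma_chan and rx_dma_chan ...
    return spicommon_slave_free_dma(SPI2_HOST);
}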
/**
* @brief Connect a SPI peripheral to GPIO pins
@ -170,7 +154,6 @@ void spicommon_connect_spi_and_dma(spi_host_device_t host, int dma_chan);
*
* @param host SPI peripheral to be routed
* @param bus_config Pointer to a spi_bus_config struct detailing the GPIO pins
* @param dma_chan DMA-channel (1 or 2) to use, or 0 for no DMA.
* @param flags Combination of SPICOMMON_BUSFLAG_* flags, set to ensure the pins set are capable with some functions:
* - ``SPICOMMON_BUSFLAG_MASTER``: Initialize I/O in master mode
* - ``SPICOMMON_BUSFLAG_SLAVE``: Initialize I/O in slave mode
@ -192,7 +175,7 @@ void spicommon_connect_spi_and_dma(spi_host_device_t host, int dma_chan);
* - ESP_ERR_INVALID_ARG if parameter is invalid
* - ESP_OK on success
*/
esp_err_t spicommon_bus_initialize_io(spi_host_device_t host, const spi_bus_config_t *bus_config, int dma_chan, uint32_t flags, uint32_t *flags_o);
esp_err_t spicommon_bus_initialize_io(spi_host_device_t host, const spi_bus_config_t *bus_config, uint32_t flags, uint32_t *flags_o);
/**
* @brief Free the IO used by a SPI peripheral

View File

@ -90,12 +90,13 @@ struct spi_slave_transaction_t {
*
* @warning For now, only supports HSPI and VSPI.
*
* @param host SPI peripheral to use as a SPI slave interface
* @param bus_config Pointer to a spi_bus_config_t struct specifying how the host should be initialized
* @param slave_config Pointer to a spi_slave_interface_config_t struct specifying the details for the slave interface
* @param dma_chan Either 1 or 2. A SPI bus used by this driver must have a DMA channel associated with
* it. The SPI hardware has two DMA channels to share. This parameter indicates which
* one to use.
* @param host SPI peripheral to use as a SPI slave interface
* @param bus_config Pointer to a spi_bus_config_t struct specifying how the host should be initialized
* @param slave_config Pointer to a spi_slave_interface_config_t struct specifying the details for the slave interface
* @param dma_chan - Selecting a DMA channel for an SPI bus allows transactions on the bus with sizes limited only by the amount of internal memory.
* - Selecting SPI_DMA_DISABLED limits the size of transactions.
* - Set to SPI_DMA_DISABLED if only the SPI flash uses this bus.
* - Set to SPI_DMA_CH_AUTO to let the driver allocate the DMA channel.
*
* @warning If a DMA channel is selected, any transmit and receive buffer used should be allocated in
* DMA-capable memory.
@ -107,10 +108,11 @@ struct spi_slave_transaction_t {
* @return
* - ESP_ERR_INVALID_ARG if configuration is invalid
* - ESP_ERR_INVALID_STATE if host already is in use
* - ESP_ERR_NOT_FOUND if there is no available DMA channel
* - ESP_ERR_NO_MEM if out of memory
* - ESP_OK on success
*/
esp_err_t spi_slave_initialize(spi_host_device_t host, const spi_bus_config_t *bus_config, const spi_slave_interface_config_t *slave_config, int dma_chan);
esp_err_t spi_slave_initialize(spi_host_device_t host, const spi_bus_config_t *bus_config, const spi_slave_interface_config_t *slave_config, spi_dma_chan_t dma_chan);
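Slave initialization follows the same pattern; a short sketch with assumed placeholder pins and queue depth:

#include "driver/spi_slave.h"

static esp_err_t example_spi_slave_init(void)
{
    spi_bus_config_t buscfg = {
        .mosi_io_num = 11, .miso_io_num = 13, .sclk_io_num = 12,
        .quadwp_io_num = -1, .quadhd_io_num = -1,
    };
    spi_slave_interface_config_t slvcfg = {
        .spics_io_num = 10,
        .queue_size = 3,
        .mode = 0,
    };
    // The last argument is now the spi_dma_chan_t enum instead of a raw channel number.
    return spi_slave_initialize(SPI2_HOST, &buscfg, &slvcfg, SPI_DMA_CH_AUTO);
}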
/**
* @brief Free a SPI bus claimed as a SPI slave interface

View File

@ -86,7 +86,7 @@ typedef struct {
uint32_t address_bits; ///< address field bits, multiples of 8 and at least 8.
uint32_t dummy_bits; ///< dummy field bits, multiples of 8 and at least 8.
uint32_t queue_size; ///< Transaction queue size. This sets how many transactions can be 'in the air' (queued using spi_slave_hd_queue_trans but not yet finished using spi_slave_hd_get_trans_result) at the same time
uint32_t dma_chan; ///< DMA channel used
spi_dma_chan_t dma_chan; ///< DMA channel to use.
spi_slave_hd_callback_config_t cb_config; ///< Callback configuration
} spi_slave_hd_slot_config_t;
@ -97,10 +97,11 @@ typedef struct {
* @param bus_config Bus configuration for the bus used
* @param config Configuration for the SPI Slave HD driver
* @return
* - ESP_OK: on success
* - ESP_ERR_INVALID_ARG: invalid argument given
* - ESP_OK: on success
* - ESP_ERR_INVALID_ARG: invalid argument given
* - ESP_ERR_INVALID_STATE: function called in invalid state, may be some resources are already in use
* - ESP_ERR_NO_MEM: memory allocation failed
* - ESP_ERR_NOT_FOUND if there is no available DMA channel
* - ESP_ERR_NO_MEM: memory allocation failed
* - or other return value from `esp_intr_alloc`
*/
esp_err_t spi_slave_hd_init(spi_host_device_t host_id, const spi_bus_config_t *bus_config,

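A short sketch of the updated slot configuration, where dma_chan now takes the spi_dma_chan_t enum (pins, mode, and queue depth below are assumed example values):

#include "driver/spi_slave_hd.h"

static esp_err_t example_spi_slave_hd_init(void)
{
    spi_bus_config_t buscfg = {
        .mosi_io_num = 11, .miso_io_num = 13, .sclk_io_num = 12,
        .quadwp_io_num = -1, .quadhd_io_num = -1,
    };
    spi_slave_hd_slot_config_t slot_cfg = {
        .spics_io_num = 10,
        .mode = 0,
        .queue_size = 4,
        .dma_chan = SPI_DMA_CH_AUTO,   // was a raw channel number before this change
    };
    return spi_slave_hd_init(SPI2_HOST, &buscfg, &slot_cfg);
}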
View File

@ -21,7 +21,7 @@
#include "driver/rtc_io.h"
#include "hal/rtc_io_hal.h"
static const char *RTCIO_TAG = "RTCIO";
static const char __attribute__((__unused__)) *RTCIO_TAG = "RTCIO";
#define RTCIO_CHECK(a, str, ret_val) ({ \
if (!(a)) { \
@ -164,29 +164,19 @@ esp_err_t rtc_gpio_pulldown_dis(gpio_num_t gpio_num)
esp_err_t rtc_gpio_hold_en(gpio_num_t gpio_num)
{
#ifdef CONFIG_IDF_TARGET_ESP32C3 // should use HAL here, TODO ESP32-C3 IDF-2511
RTCIO_CHECK(gpio_num <= GPIO_NUM_5, "RTCIO number error", ESP_ERR_INVALID_ARG);
REG_SET_BIT(RTC_CNTL_PAD_HOLD_REG, BIT(gpio_num));
#else
RTCIO_CHECK(rtc_gpio_is_valid_gpio(gpio_num), "RTCIO number error", ESP_ERR_INVALID_ARG);
RTCIO_ENTER_CRITICAL();
rtcio_hal_hold_enable(rtc_io_number_get(gpio_num));
RTCIO_EXIT_CRITICAL();
#endif
return ESP_OK;
}
esp_err_t rtc_gpio_hold_dis(gpio_num_t gpio_num)
{
#ifdef CONFIG_IDF_TARGET_ESP32C3 // should use HAL here, TODO ESP32-C3 IDF-2511
RTCIO_CHECK(gpio_num <= GPIO_NUM_5, "RTCIO number error", ESP_ERR_INVALID_ARG);
REG_CLR_BIT(RTC_CNTL_PAD_HOLD_REG, BIT(gpio_num));
#else
RTCIO_CHECK(rtc_gpio_is_valid_gpio(gpio_num), "RTCIO number error", ESP_ERR_INVALID_ARG);
RTCIO_ENTER_CRITICAL();
rtcio_hal_hold_disable(rtc_io_number_get(gpio_num));
RTCIO_EXIT_CRITICAL();
#endif
return ESP_OK;
}
@ -224,16 +214,6 @@ esp_err_t rtc_gpio_force_hold_dis_all(void)
esp_err_t rtc_gpio_wakeup_enable(gpio_num_t gpio_num, gpio_int_type_t intr_type)
{
#ifdef CONFIG_IDF_TARGET_ESP32C3 // should use HAL here, TODO ESP32-C3 IDF-2511
RTCIO_CHECK(gpio_num <= GPIO_NUM_5, "RTCIO number error", ESP_ERR_INVALID_ARG);
REG_SET_BIT(RTC_CNTL_GPIO_WAKEUP_REG, RTC_CNTL_GPIO_PIN0_WAKEUP_ENABLE_M >> gpio_num);
uint32_t reg = REG_READ(RTC_CNTL_GPIO_WAKEUP_REG);
reg &= (~(RTC_CNTL_GPIO_PIN0_INT_TYPE_V << (RTC_CNTL_GPIO_PIN0_INT_TYPE_S - gpio_num * 3)));
reg |= (intr_type << (RTC_CNTL_GPIO_PIN0_INT_TYPE_S - gpio_num * 3));
REG_WRITE(RTC_CNTL_GPIO_WAKEUP_REG, reg);
ESP_LOGD(RTCIO_TAG, "gpio wake up 0x%08x", REG_READ(RTC_CNTL_GPIO_WAKEUP_REG));
#else
RTCIO_CHECK(rtc_gpio_is_valid_gpio(gpio_num), "RTCIO number error", ESP_ERR_INVALID_ARG);
if (intr_type == GPIO_INTR_POSEDGE || intr_type == GPIO_INTR_NEGEDGE || intr_type == GPIO_INTR_ANYEDGE) {
return ESP_ERR_INVALID_ARG; // Don't support this mode.
@ -241,21 +221,15 @@ esp_err_t rtc_gpio_wakeup_enable(gpio_num_t gpio_num, gpio_int_type_t intr_type)
RTCIO_ENTER_CRITICAL();
rtcio_hal_wakeup_enable(rtc_io_number_get(gpio_num), intr_type);
RTCIO_EXIT_CRITICAL();
#endif // CONFIG_IDF_TARGET_ESP32C3
return ESP_OK;
}
esp_err_t rtc_gpio_wakeup_disable(gpio_num_t gpio_num)
{
#ifdef CONFIG_IDF_TARGET_ESP32C3 // should use HAL here, TODO ESP32-C3 IDF-2511
RTCIO_CHECK(gpio_num <= GPIO_NUM_5, "RTCIO number error", ESP_ERR_INVALID_ARG);
REG_CLR_BIT(RTC_CNTL_GPIO_WAKEUP_REG, RTC_CNTL_GPIO_PIN0_WAKEUP_ENABLE_M >> gpio_num);
#else
RTCIO_CHECK(rtc_gpio_is_valid_gpio(gpio_num), "RTCIO number error", ESP_ERR_INVALID_ARG);
RTCIO_ENTER_CRITICAL();
rtcio_hal_wakeup_disable(rtc_io_number_get(gpio_num));
RTCIO_EXIT_CRITICAL();
#endif
return ESP_OK;
}

View File

@ -31,21 +31,11 @@
#include "stdatomic.h"
#include "hal/spi_hal.h"
#include "esp_rom_gpio.h"
#if CONFIG_IDF_TARGET_ESP32
#include "soc/dport_reg.h"
#endif
//This GDMA related part will be introduced by GDMA dedicated APIs in the future. Here we temporarily use macros.
#if SOC_GDMA_SUPPORTED
#include "hal/gdma_ll.h"
#include "soc/gdma_channel.h"
#include "soc/spi_caps.h"
#define spi_dma_set_rx_channel_priority(gdma_chan, priority) gdma_ll_rx_set_priority(&GDMA, gdma_chan, priority);
#define spi_dma_set_tx_channel_priority(gdma_chan, priority) gdma_ll_tx_set_priority(&GDMA, gdma_chan, priority);
#define spi_dma_connect_rx_channel_to_periph(gdma_chan, periph_id) gdma_ll_rx_connect_to_periph(&GDMA, gdma_chan, periph_id);
#define spi_dma_connect_tx_channel_to_periph(gdma_chan, periph_id) gdma_ll_tx_connect_to_periph(&GDMA, gdma_chan, periph_id);
#include "esp_private/gdma.h"
#endif
static const char *SPI_TAG = "spi";
@ -63,44 +53,61 @@ static const char *SPI_TAG = "spi";
SPI_CHECK(GPIO_IS_VALID_GPIO(pin_num), pin_name" not valid", ESP_ERR_INVALID_ARG); \
}
typedef struct spi_device_t spi_device_t;
#define SPI_MAIN_BUS_DEFAULT() { \
.host_id = 0, \
.bus_attr = { \
.tx_dma_chan = 0, \
.rx_dma_chan = 0, \
.max_transfer_sz = SOC_SPI_MAXIMUM_BUFFER_SIZE, \
.dma_desc_num= 0, \
}, \
}
#define FUNC_GPIO PIN_FUNC_GPIO
#define DMA_CHANNEL_ENABLED(dma_chan) (BIT(dma_chan-1))
typedef struct {
int host_id;
spi_destroy_func_t destroy_func;
void* destroy_arg;
spi_bus_attr_t bus_attr;
#if SOC_GDMA_SUPPORTED
gdma_channel_handle_t tx_channel;
gdma_channel_handle_t rx_channel;
#endif
} spicommon_bus_context_t;
#define MAIN_BUS_DEFAULT() { \
.host_id = 0, \
.bus_attr = { \
.dma_chan = 0, \
.max_transfer_sz = SOC_SPI_MAXIMUM_BUFFER_SIZE, \
.dma_desc_num= 0, \
}, \
}
//Periph 1 is 'claimed' by SPI flash code.
static atomic_bool spi_periph_claimed[SOC_SPI_PERIPH_NUM] = { ATOMIC_VAR_INIT(true), ATOMIC_VAR_INIT(false), ATOMIC_VAR_INIT(false),
#if SOC_SPI_PERIPH_NUM >= 4
ATOMIC_VAR_INIT(false),
static atomic_bool spi_periph_claimed[SOC_SPI_PERIPH_NUM] = { ATOMIC_VAR_INIT(true), ATOMIC_VAR_INIT(false),
#if (SOC_SPI_PERIPH_NUM >= 3)
ATOMIC_VAR_INIT(false),
#endif
#if (SOC_SPI_PERIPH_NUM >= 4)
ATOMIC_VAR_INIT(false),
#endif
};
static const char* spi_claiming_func[3] = {NULL, NULL, NULL};
static uint8_t spi_dma_chan_enabled = 0;
static portMUX_TYPE spi_dma_spinlock = portMUX_INITIALIZER_UNLOCKED;
static spicommon_bus_context_t s_mainbus = MAIN_BUS_DEFAULT();
static const char* spi_claiming_func[3] = {NULL, NULL, NULL};
static spicommon_bus_context_t s_mainbus = SPI_MAIN_BUS_DEFAULT();
static spicommon_bus_context_t* bus_ctx[SOC_SPI_PERIPH_NUM] = {&s_mainbus};
#if !SOC_GDMA_SUPPORTED
//Each bit stands for 1 dma channel, BIT(0) should be used for SPI1
static uint8_t spi_dma_chan_enabled = 0;
static portMUX_TYPE spi_dma_spinlock = portMUX_INITIALIZER_UNLOCKED;
#endif //#if !SOC_GDMA_SUPPORTED
static inline bool is_valid_host(spi_host_device_t host)
{
#if (SOC_SPI_PERIPH_NUM == 2)
return host >= SPI1_HOST && host <= SPI2_HOST;
#elif (SOC_SPI_PERIPH_NUM == 3)
return host >= SPI1_HOST && host <= SPI3_HOST;
#endif
}
//----------------------------------------------------------alloc spi periph-------------------------------------------------------//
//Returns true if this peripheral is successfully claimed, false if otherwise.
bool spicommon_periph_claim(spi_host_device_t host, const char* source)
{
@ -139,90 +146,217 @@ int spicommon_irqdma_source_for_host(spi_host_device_t host)
return spi_periph_signal[host].irq_dma;
}
//----------------------------------------------------------alloc dma periph-------------------------------------------------------//
#if !SOC_GDMA_SUPPORTED
static inline periph_module_t get_dma_periph(int dma_chan)
{
assert(dma_chan >= 1 && dma_chan <= SOC_SPI_DMA_CHAN_NUM);
#if CONFIG_IDF_TARGET_ESP32S2
if (dma_chan == 1) {
return PERIPH_SPI2_DMA_MODULE;
} else if (dma_chan==2) {
} else if (dma_chan == 2) {
return PERIPH_SPI3_DMA_MODULE;
} else {
abort();
return -1;
}
#elif CONFIG_IDF_TARGET_ESP32
return PERIPH_SPI_DMA_MODULE;
#elif SOC_GDMA_SUPPORTED
return PERIPH_GDMA_MODULE;
#else
return 0;
#endif
}
bool spicommon_dma_chan_claim(int dma_chan)
static bool spicommon_dma_chan_claim(int dma_chan, uint32_t *out_actual_dma_chan)
{
bool ret = false;
assert(dma_chan >= 1 && dma_chan <= SOC_SPI_DMA_CHAN_NUM);
portENTER_CRITICAL(&spi_dma_spinlock);
if ( !(spi_dma_chan_enabled & DMA_CHANNEL_ENABLED(dma_chan)) ) {
// get the channel only when it's not claimed yet.
spi_dma_chan_enabled |= DMA_CHANNEL_ENABLED(dma_chan);
bool is_used = (BIT(dma_chan) & spi_dma_chan_enabled);
if (!is_used) {
spi_dma_chan_enabled |= BIT(dma_chan);
periph_module_enable(get_dma_periph(dma_chan));
*out_actual_dma_chan = dma_chan;
ret = true;
}
periph_module_enable(get_dma_periph(dma_chan));
portEXIT_CRITICAL(&spi_dma_spinlock);
return ret;
}
bool spicommon_dma_chan_in_use(int dma_chan)
{
assert(dma_chan ==1 || dma_chan == 2);
return spi_dma_chan_enabled & DMA_CHANNEL_ENABLED(dma_chan);
}
bool spicommon_dma_chan_free(int dma_chan)
{
assert( dma_chan == 1 || dma_chan == 2 );
assert( spi_dma_chan_enabled & DMA_CHANNEL_ENABLED(dma_chan) );
portENTER_CRITICAL(&spi_dma_spinlock);
spi_dma_chan_enabled &= ~DMA_CHANNEL_ENABLED(dma_chan);
periph_module_disable(get_dma_periph(dma_chan));
portEXIT_CRITICAL(&spi_dma_spinlock);
return true;
}
void spicommon_connect_spi_and_dma(spi_host_device_t host, int dma_chan)
static void spicommon_connect_spi_and_dma(spi_host_device_t host, int dma_chan)
{
#if CONFIG_IDF_TARGET_ESP32
DPORT_SET_PERI_REG_BITS(DPORT_SPI_DMA_CHAN_SEL_REG, 3, dma_chan, (host * 2));
#elif CONFIG_IDF_TARGET_ESP32S2
//On ESP32S2, each SPI controller has its own DMA channel. So there is no need to connect them.
#elif SOC_GDMA_SUPPORTED
int gdma_chan, periph_id;
if (dma_chan == 1) {
gdma_chan = SOC_GDMA_SPI2_DMA_CHANNEL;
periph_id = SOC_GDMA_TRIG_PERIPH_SPI2;
#ifdef SOC_GDMA_TRIG_PERIPH_SPI3
} else if (dma_chan == 2) {
gdma_chan = SOC_GDMA_SPI3_DMA_CHANNEL;
periph_id = SOC_GDMA_TRIG_PERIPH_SPI3;
#endif
} else {
abort();
}
spi_dma_connect_rx_channel_to_periph(gdma_chan, periph_id);
spi_dma_connect_tx_channel_to_periph(gdma_chan, periph_id);
spi_dma_set_rx_channel_priority(gdma_chan, 1);
spi_dma_set_tx_channel_priority(gdma_chan, 1);
#endif //#elif SOC_GDMA_SUPPORTED
}
static esp_err_t spicommon_dma_chan_alloc(spi_host_device_t host_id, spi_dma_chan_t dma_chan, uint32_t *out_actual_tx_dma_chan, uint32_t *out_actual_rx_dma_chan)
{
assert(is_valid_host(host_id));
#if CONFIG_IDF_TARGET_ESP32
assert(dma_chan > SPI_DMA_DISABLED && dma_chan <= SPI_DMA_CH_AUTO);
#elif CONFIG_IDF_TARGET_ESP32S2
assert(dma_chan == (int)host_id || dma_chan == SPI_DMA_CH_AUTO);
#endif
esp_err_t ret = ESP_OK;
bool success = false;
uint32_t actual_dma_chan = 0;
if (dma_chan == SPI_DMA_CH_AUTO) {
#if CONFIG_IDF_TARGET_ESP32
for (int i = 1; i < SOC_SPI_DMA_CHAN_NUM+1; i++) {
success = spicommon_dma_chan_claim(i, &actual_dma_chan);
if (success) {
break;
}
}
#elif CONFIG_IDF_TARGET_ESP32S2
//On ESP32S2, each SPI controller has its own DMA channel
success = spicommon_dma_chan_claim(host_id, &actual_dma_chan);
#endif //#if CONFIG_IDF_TARGET_XXX
} else {
success = spicommon_dma_chan_claim((int)dma_chan, &actual_dma_chan);
}
//On ESP32 and ESP32S2, actual_tx_dma_chan and actual_rx_dma_chan are always same
*out_actual_tx_dma_chan = actual_dma_chan;
*out_actual_rx_dma_chan = actual_dma_chan;
if (!success) {
SPI_CHECK(false, "no available dma channel", ESP_ERR_NOT_FOUND);
}
spicommon_connect_spi_and_dma(host_id, *out_actual_tx_dma_chan);
return ret;
}
#else //SOC_GDMA_SUPPORTED
static esp_err_t spicommon_dma_chan_alloc(spi_host_device_t host_id, spi_dma_chan_t dma_chan, uint32_t *out_actual_tx_dma_chan, uint32_t *out_actual_rx_dma_chan)
{
assert(is_valid_host(host_id));
assert(dma_chan == SPI_DMA_CH_AUTO);
esp_err_t ret = ESP_OK;
spicommon_bus_context_t *ctx = bus_ctx[host_id];
if (dma_chan == SPI_DMA_CH_AUTO) {
gdma_channel_alloc_config_t tx_alloc_config = {
.flags.reserve_sibling = 1,
.direction = GDMA_CHANNEL_DIRECTION_TX,
};
ret = gdma_new_channel(&tx_alloc_config, &ctx->tx_channel);
if (ret != ESP_OK) {
return ret;
}
gdma_channel_alloc_config_t rx_alloc_config = {
.direction = GDMA_CHANNEL_DIRECTION_RX,
.sibling_chan = ctx->tx_channel,
};
ret = gdma_new_channel(&rx_alloc_config, &ctx->rx_channel);
if (ret != ESP_OK) {
return ret;
}
if (host_id == SPI2_HOST) {
gdma_connect(ctx->rx_channel, GDMA_MAKE_TRIGGER(GDMA_TRIG_PERIPH_SPI, 2));
gdma_connect(ctx->tx_channel, GDMA_MAKE_TRIGGER(GDMA_TRIG_PERIPH_SPI, 2));
}
#if (SOC_SPI_PERIPH_NUM >= 3)
else if (host_id == SPI3_HOST) {
gdma_connect(ctx->rx_channel, GDMA_MAKE_TRIGGER(GDMA_TRIG_PERIPH_SPI, 3));
gdma_connect(ctx->tx_channel, GDMA_MAKE_TRIGGER(GDMA_TRIG_PERIPH_SPI, 3));
}
#endif
gdma_get_channel_id(ctx->tx_channel, (int *)out_actual_tx_dma_chan);
gdma_get_channel_id(ctx->rx_channel, (int *)out_actual_rx_dma_chan);
}
return ret;
}
#endif //#if !SOC_GDMA_SUPPORTED
esp_err_t spicommon_slave_dma_chan_alloc(spi_host_device_t host_id, spi_dma_chan_t dma_chan, uint32_t *out_actual_tx_dma_chan, uint32_t *out_actual_rx_dma_chan)
{
assert(is_valid_host(host_id));
#if CONFIG_IDF_TARGET_ESP32
assert(dma_chan > SPI_DMA_DISABLED && dma_chan <= SPI_DMA_CH_AUTO);
#elif CONFIG_IDF_TARGET_ESP32S2
assert(dma_chan == (int)host_id || dma_chan == SPI_DMA_CH_AUTO);
#endif
esp_err_t ret = ESP_OK;
uint32_t actual_tx_dma_chan = 0;
uint32_t actual_rx_dma_chan = 0;
spicommon_bus_context_t *ctx = (spicommon_bus_context_t *)calloc(1, sizeof(spicommon_bus_context_t));
if (!ctx) {
ret = ESP_ERR_NO_MEM;
goto cleanup;
}
bus_ctx[host_id] = ctx;
ctx->host_id = host_id;
ret = spicommon_dma_chan_alloc(host_id, dma_chan, &actual_tx_dma_chan, &actual_rx_dma_chan);
if (ret != ESP_OK) {
goto cleanup;
}
ctx->bus_attr.tx_dma_chan = actual_tx_dma_chan;
ctx->bus_attr.rx_dma_chan = actual_rx_dma_chan;
*out_actual_tx_dma_chan = actual_tx_dma_chan;
*out_actual_rx_dma_chan = actual_rx_dma_chan;
return ret;
cleanup:
free(ctx);
ctx = NULL;
return ret;
}
//----------------------------------------------------------free dma periph-------------------------------------------------------//
static esp_err_t spicommon_dma_chan_free(spi_host_device_t host_id)
{
assert(is_valid_host(host_id));
spicommon_bus_context_t *ctx = bus_ctx[host_id];
#if !SOC_GDMA_SUPPORTED
//On ESP32S2, each SPI controller has its own DMA channel
int dma_chan = ctx->bus_attr.tx_dma_chan;
assert(spi_dma_chan_enabled & BIT(dma_chan));
portENTER_CRITICAL(&spi_dma_spinlock);
spi_dma_chan_enabled &= ~BIT(dma_chan);
periph_module_disable(get_dma_periph(dma_chan));
portEXIT_CRITICAL(&spi_dma_spinlock);
#else //SOC_GDMA_SUPPORTED
if (ctx->rx_channel) {
gdma_disconnect(ctx->rx_channel);
gdma_del_channel(ctx->rx_channel);
}
if (ctx->tx_channel) {
gdma_disconnect(ctx->tx_channel);
gdma_del_channel(ctx->tx_channel);
}
#endif
return ESP_OK;
}
esp_err_t spicommon_slave_free_dma(spi_host_device_t host_id)
{
assert(is_valid_host(host_id));
esp_err_t ret = spicommon_dma_chan_free(host_id);
free(bus_ctx[host_id]);
bus_ctx[host_id] = NULL;
return ret;
}
//----------------------------------------------------------IO general-------------------------------------------------------//
static bool bus_uses_iomux_pins(spi_host_device_t host, const spi_bus_config_t* bus_config)
{
if (bus_config->sclk_io_num>=0 &&
@ -254,7 +388,7 @@ Do the common stuff to hook up a SPI host to a bus defined by a bunch of GPIO pi
bus config struct and it'll set up the GPIO matrix and enable the device. If a pin is set to non-negative value,
it should be able to be initialized.
*/
esp_err_t spicommon_bus_initialize_io(spi_host_device_t host, const spi_bus_config_t *bus_config, int dma_chan, uint32_t flags, uint32_t* flags_o)
esp_err_t spicommon_bus_initialize_io(spi_host_device_t host, const spi_bus_config_t *bus_config, uint32_t flags, uint32_t* flags_o)
{
uint32_t temp_flag = 0;
@ -480,22 +614,23 @@ spi_bus_lock_handle_t spi_bus_lock_get_by_id(spi_host_device_t host_id)
return bus_ctx[host_id]->bus_attr.lock;
}
static inline bool is_valid_host(spi_host_device_t host)
{
return host >= SPI1_HOST && host <= SPI3_HOST;
}
esp_err_t spi_bus_initialize(spi_host_device_t host_id, const spi_bus_config_t *bus_config, int dma_chan)
//----------------------------------------------------------master bus init-------------------------------------------------------//
esp_err_t spi_bus_initialize(spi_host_device_t host_id, const spi_bus_config_t *bus_config, spi_dma_chan_t dma_chan)
{
esp_err_t err = ESP_OK;
spicommon_bus_context_t *ctx = NULL;
spi_bus_attr_t *bus_attr = NULL;
uint32_t actual_tx_dma_chan = 0;
uint32_t actual_rx_dma_chan = 0;
SPI_CHECK(is_valid_host(host_id), "invalid host_id", ESP_ERR_INVALID_ARG);
SPI_CHECK(bus_ctx[host_id] == NULL, "SPI bus already initialized.", ESP_ERR_INVALID_STATE);
#ifdef CONFIG_IDF_TARGET_ESP32
SPI_CHECK( dma_chan >= 0 && dma_chan <= 2, "invalid dma channel", ESP_ERR_INVALID_ARG );
SPI_CHECK(dma_chan >= SPI_DMA_DISABLED && dma_chan <= SPI_DMA_CH_AUTO, "invalid dma channel", ESP_ERR_INVALID_ARG );
#elif CONFIG_IDF_TARGET_ESP32S2
SPI_CHECK( dma_chan == 0 || dma_chan == host_id, "invalid dma channel", ESP_ERR_INVALID_ARG );
SPI_CHECK( dma_chan == SPI_DMA_DISABLED || dma_chan == (int)host_id || dma_chan == SPI_DMA_CH_AUTO, "invalid dma channel", ESP_ERR_INVALID_ARG );
#elif SOC_GDMA_SUPPORTED
SPI_CHECK( dma_chan == SPI_DMA_DISABLED || dma_chan == SPI_DMA_CH_AUTO, "invalid dma channel, chip only support spi dma channel auto-alloc", ESP_ERR_INVALID_ARG );
#endif
SPI_CHECK((bus_config->intr_flags & (ESP_INTR_FLAG_HIGH|ESP_INTR_FLAG_EDGE|ESP_INTR_FLAG_INTRDISABLED))==0, "intr flag not allowed", ESP_ERR_INVALID_ARG);
#ifndef CONFIG_SPI_MASTER_ISR_IN_IRAM
@ -505,36 +640,27 @@ esp_err_t spi_bus_initialize(spi_host_device_t host_id, const spi_bus_config_t *
bool spi_chan_claimed = spicommon_periph_claim(host_id, "spi master");
SPI_CHECK(spi_chan_claimed, "host_id already in use", ESP_ERR_INVALID_STATE);
if (dma_chan != 0) {
bool dma_chan_claimed = spicommon_dma_chan_claim(dma_chan);
if (!dma_chan_claimed) {
spicommon_periph_free(host_id);
SPI_CHECK(false, "dma channel already in use", ESP_ERR_INVALID_STATE);
}
spicommon_connect_spi_and_dma(host_id, dma_chan);
}
//clean and initialize the context
ctx = (spicommon_bus_context_t*)malloc(sizeof(spicommon_bus_context_t));
ctx = (spicommon_bus_context_t *)calloc(1, sizeof(spicommon_bus_context_t));
if (!ctx) {
err = ESP_ERR_NO_MEM;
goto cleanup;
}
*ctx = (spicommon_bus_context_t) {
.host_id = host_id,
.bus_attr = {
.bus_cfg = *bus_config,
.dma_chan = dma_chan,
},
};
bus_ctx[host_id] = ctx;
ctx->host_id = host_id;
bus_attr = &ctx->bus_attr;
if (dma_chan == 0) {
bus_attr->max_transfer_sz = SOC_SPI_MAXIMUM_BUFFER_SIZE;
bus_attr->dma_desc_num = 0;
} else {
//See how many dma descriptors we need and allocate them
bus_attr->bus_cfg = *bus_config;
if (dma_chan != SPI_DMA_DISABLED) {
bus_attr->dma_enabled = 1;
err = spicommon_dma_chan_alloc(host_id, dma_chan, &actual_tx_dma_chan, &actual_rx_dma_chan);
if (err != ESP_OK) {
goto cleanup;
}
bus_attr->tx_dma_chan = actual_tx_dma_chan;
bus_attr->rx_dma_chan = actual_rx_dma_chan;
int dma_desc_ct = lldesc_get_required_num(bus_config->max_transfer_sz);
if (dma_desc_ct == 0) dma_desc_ct = 1; //default to 4k when max is not given
@ -546,6 +672,10 @@ esp_err_t spi_bus_initialize(spi_host_device_t host_id, const spi_bus_config_t *
goto cleanup;
}
bus_attr->dma_desc_num = dma_desc_ct;
} else {
bus_attr->dma_enabled = 0;
bus_attr->max_transfer_sz = SOC_SPI_MAXIMUM_BUFFER_SIZE;
bus_attr->dma_desc_num = 0;
}
spi_bus_lock_config_t lock_config = {
@ -565,12 +695,11 @@ esp_err_t spi_bus_initialize(spi_host_device_t host_id, const spi_bus_config_t *
}
#endif //CONFIG_PM_ENABLE
err = spicommon_bus_initialize_io(host_id, bus_config, dma_chan, SPICOMMON_BUSFLAG_MASTER | bus_config->flags, &bus_attr->flags);
err = spicommon_bus_initialize_io(host_id, bus_config, SPICOMMON_BUSFLAG_MASTER | bus_config->flags, &bus_attr->flags);
if (err != ESP_OK) {
goto cleanup;
}
bus_ctx[host_id] = ctx;
return ESP_OK;
cleanup:
@ -583,12 +712,15 @@ cleanup:
}
free(bus_attr->dmadesc_tx);
free(bus_attr->dmadesc_rx);
}
free(ctx);
if (dma_chan) {
spicommon_dma_chan_free(dma_chan);
bus_attr->dmadesc_tx = NULL;
bus_attr->dmadesc_rx = NULL;
if (bus_attr->dma_enabled) {
spicommon_dma_chan_free(host_id);
}
}
spicommon_periph_free(host_id);
free(bus_ctx[host_id]);
bus_ctx[host_id] = NULL;
return err;
}
@ -615,15 +747,14 @@ esp_err_t spi_bus_free(spi_host_device_t host_id)
esp_pm_lock_delete(bus_attr->pm_lock);
#endif
spi_bus_deinit_lock(bus_attr->lock);
free(bus_attr->dmadesc_rx);
free(bus_attr->dmadesc_tx);
if (bus_attr->dma_chan > 0) {
spicommon_dma_chan_free (bus_attr->dma_chan);
bus_attr->dmadesc_tx = NULL;
bus_attr->dmadesc_rx = NULL;
if (bus_attr->dma_enabled > 0) {
spicommon_dma_chan_free(host_id);
}
spicommon_periph_free(host_id);
free(ctx);
bus_ctx[host_id] = NULL;
return err;

View File

@ -188,10 +188,12 @@ static esp_err_t spi_master_deinit_driver(void* arg);
static inline bool is_valid_host(spi_host_device_t host)
{
//SPI1 can be used as GPSPI only on ESP32
#if CONFIG_IDF_TARGET_ESP32
return host >= SPI1_HOST && host <= SPI3_HOST;
#else
// SPI_HOST (SPI1_HOST) is not supported by the SPI Master driver on ESP32-S2 and later
#elif (SOC_SPI_PERIPH_NUM == 2)
return host == SPI2_HOST;
#elif (SOC_SPI_PERIPH_NUM == 3)
return host >= SPI2_HOST && host <= SPI3_HOST;
#endif
}
@ -231,17 +233,18 @@ static esp_err_t spi_master_init_driver(spi_host_device_t host_id)
}
//assign the SPI, RX DMA and TX DMA peripheral registers beginning address
spi_hal_dma_config_t hal_dma_config = {
spi_hal_config_t hal_config = {
//On ESP32-S2 and earlier chips, DMA registers are part of SPI registers. Pass the registers of SPI peripheral to control it.
.dma_in = SPI_LL_GET_HW(host_id),
.dma_out = SPI_LL_GET_HW(host_id),
.dma_enabled = bus_attr->dma_enabled,
.dmadesc_tx = bus_attr->dmadesc_tx,
.dmadesc_rx = bus_attr->dmadesc_rx,
.dmadesc_n = bus_attr->dma_desc_num
.tx_dma_chan = bus_attr->tx_dma_chan,
.rx_dma_chan = bus_attr->rx_dma_chan,
.dmadesc_n = bus_attr->dma_desc_num,
};
spi_hal_init(&host->hal, host_id, &hal_dma_config);
host->hal.dma_enabled = (bus_attr->dma_chan != 0);
spi_hal_init(&host->hal, host_id, &hal_config);
if (host_id != SPI1_HOST) {
//SPI1 attributes are already initialized at start up.
@ -606,8 +609,9 @@ static void SPI_MASTER_ISR_ATTR spi_intr(void *arg)
//Okay, transaction is done.
const int cs = host->cur_cs;
//Tell common code DMA workaround that our DMA channel is idle. If needed, the code will do a DMA reset.
if (bus_attr->dma_chan) {
spicommon_dmaworkaround_idle(bus_attr->dma_chan);
if (bus_attr->dma_enabled) {
//This workaround is only for esp32, where tx_dma_chan and rx_dma_chan are always same
spicommon_dmaworkaround_idle(bus_attr->tx_dma_chan);
}
//cur_cs is changed to DEV_NUM_MAX here
@ -658,9 +662,10 @@ static void SPI_MASTER_ISR_ATTR spi_intr(void *arg)
if (trans_found) {
spi_trans_priv_t *const cur_trans_buf = &host->cur_trans_buf;
if (bus_attr->dma_chan != 0 && (cur_trans_buf->buffer_to_rcv || cur_trans_buf->buffer_to_send)) {
if (bus_attr->dma_enabled && (cur_trans_buf->buffer_to_rcv || cur_trans_buf->buffer_to_send)) {
//mark channel as active, so that the DMA will not be reset by the slave
spicommon_dmaworkaround_transfer_active(bus_attr->dma_chan);
//This workaround is only for esp32, where tx_dma_chan and rx_dma_chan are always same
spicommon_dmaworkaround_transfer_active(bus_attr->tx_dma_chan);
}
spi_new_trans(device_to_send, cur_trans_buf);
}
@ -693,7 +698,7 @@ static SPI_MASTER_ISR_ATTR esp_err_t check_trans_valid(spi_device_handle_t handl
SPI_CHECK(!((trans_desc->flags & (SPI_TRANS_MODE_DIO|SPI_TRANS_MODE_QIO)) && (handle->cfg.flags & SPI_DEVICE_3WIRE)), "incompatible iface params", ESP_ERR_INVALID_ARG);
SPI_CHECK(!((trans_desc->flags & (SPI_TRANS_MODE_DIO|SPI_TRANS_MODE_QIO)) && !is_half_duplex), "incompatible iface params", ESP_ERR_INVALID_ARG);
#ifdef CONFIG_IDF_TARGET_ESP32
SPI_CHECK(!is_half_duplex || bus_attr->dma_chan == 0 || !rx_enabled || !tx_enabled, "SPI half duplex mode does not support using DMA with both MOSI and MISO phases.", ESP_ERR_INVALID_ARG );
SPI_CHECK(!is_half_duplex || !bus_attr->dma_enabled || !rx_enabled || !tx_enabled, "SPI half duplex mode does not support using DMA with both MOSI and MISO phases.", ESP_ERR_INVALID_ARG );
#elif CONFIG_IDF_TARGET_ESP32S3
SPI_CHECK(!is_half_duplex || !tx_enabled || !rx_enabled, "SPI half duplex mode is not supported when both MOSI and MISO phases are enabled.", ESP_ERR_INVALID_ARG);
#endif
@ -788,7 +793,7 @@ esp_err_t SPI_MASTER_ATTR spi_device_queue_trans(spi_device_handle_t handle, spi
SPI_CHECK(!spi_bus_device_is_polling(handle), "Cannot queue new transaction while previous polling transaction is not terminated.", ESP_ERR_INVALID_STATE );
spi_trans_priv_t trans_buf;
ret = setup_priv_desc(trans_desc, &trans_buf, (host->bus_attr->dma_chan!=0));
ret = setup_priv_desc(trans_desc, &trans_buf, (host->bus_attr->dma_enabled));
if (ret != ESP_OK) return ret;
#ifdef CONFIG_PM_ENABLE
@ -877,8 +882,9 @@ esp_err_t SPI_MASTER_ISR_ATTR spi_device_acquire_bus(spi_device_t *device, TickT
//configure the device ahead so that we don't need to do it again in the following transactions
spi_setup_device(host->device[device->id]);
//the DMA is also occupied by the device, all the slave devices that using DMA should wait until bus released.
if (host->bus_attr->dma_chan != 0) {
spicommon_dmaworkaround_transfer_active(host->bus_attr->dma_chan);
if (host->bus_attr->dma_enabled) {
//This workaround is only for esp32, where tx_dma_chan and rx_dma_chan are always same
spicommon_dmaworkaround_transfer_active(host->bus_attr->tx_dma_chan);
}
return ESP_OK;
}
@ -893,8 +899,9 @@ void SPI_MASTER_ISR_ATTR spi_device_release_bus(spi_device_t *dev)
assert(0);
}
if (host->bus_attr->dma_chan != 0) {
spicommon_dmaworkaround_idle(host->bus_attr->dma_chan);
if (host->bus_attr->dma_enabled) {
//This workaround is only for esp32, where tx_dma_chan and rx_dma_chan are always same
spicommon_dmaworkaround_idle(host->bus_attr->tx_dma_chan);
}
//Tell common code DMA workaround that our DMA channel is idle. If needed, the code will do a DMA reset.
@ -928,7 +935,7 @@ esp_err_t SPI_MASTER_ISR_ATTR spi_device_polling_start(spi_device_handle_t handl
}
if (ret != ESP_OK) return ret;
ret = setup_priv_desc(trans_desc, &host->cur_trans_buf, (host->bus_attr->dma_chan!=0));
ret = setup_priv_desc(trans_desc, &host->cur_trans_buf, (host->bus_attr->dma_enabled));
if (ret!=ESP_OK) return ret;
//Polling, no interrupt is used.

View File

@ -67,7 +67,9 @@ typedef struct {
int max_transfer_sz;
QueueHandle_t trans_queue;
QueueHandle_t ret_queue;
int dma_chan;
bool dma_enabled;
uint32_t tx_dma_chan;
uint32_t rx_dma_chan;
#ifdef CONFIG_PM_ENABLE
esp_pm_lock_handle_t pm_lock;
#endif
@ -79,10 +81,12 @@ static void IRAM_ATTR spi_intr(void *arg);
static inline bool is_valid_host(spi_host_device_t host)
{
//SPI1 can be used as GPSPI only on ESP32
#if CONFIG_IDF_TARGET_ESP32
return host >= SPI1_HOST && host <= SPI3_HOST;
#else
// SPI_HOST (SPI1_HOST) is not supported by the SPI Slave driver on ESP32-S2 & later
#elif (SOC_SPI_PERIPH_NUM == 2)
return host == SPI2_HOST;
#elif (SOC_SPI_PERIPH_NUM == 3)
return host >= SPI2_HOST && host <= SPI3_HOST;
#endif
}
@ -108,17 +112,21 @@ static inline void restore_cs(spi_slave_t *host)
}
}
esp_err_t spi_slave_initialize(spi_host_device_t host, const spi_bus_config_t *bus_config, const spi_slave_interface_config_t *slave_config, int dma_chan)
esp_err_t spi_slave_initialize(spi_host_device_t host, const spi_bus_config_t *bus_config, const spi_slave_interface_config_t *slave_config, spi_dma_chan_t dma_chan)
{
bool spi_chan_claimed, dma_chan_claimed;
bool spi_chan_claimed;
uint32_t actual_tx_dma_chan = 0;
uint32_t actual_rx_dma_chan = 0;
esp_err_t ret = ESP_OK;
esp_err_t err;
//We only support HSPI/VSPI, period.
SPI_CHECK(is_valid_host(host), "invalid host", ESP_ERR_INVALID_ARG);
#if defined(CONFIG_IDF_TARGET_ESP32)
SPI_CHECK( dma_chan >= 0 && dma_chan <= 2, "invalid dma channel", ESP_ERR_INVALID_ARG );
#elif defined(CONFIG_IDF_TARGET_ESP32S2)
SPI_CHECK( dma_chan == 0 || dma_chan == host, "invalid dma channel", ESP_ERR_INVALID_ARG );
#ifdef CONFIG_IDF_TARGET_ESP32
SPI_CHECK(dma_chan >= SPI_DMA_DISABLED && dma_chan <= SPI_DMA_CH_AUTO, "invalid dma channel", ESP_ERR_INVALID_ARG );
#elif CONFIG_IDF_TARGET_ESP32S2
SPI_CHECK( dma_chan == SPI_DMA_DISABLED || dma_chan == (int)host || dma_chan == SPI_DMA_CH_AUTO, "invalid dma channel", ESP_ERR_INVALID_ARG );
#elif SOC_GDMA_SUPPORTED
SPI_CHECK( dma_chan == SPI_DMA_DISABLED || dma_chan == SPI_DMA_CH_AUTO, "invalid dma channel, chip only support spi dma channel auto-alloc", ESP_ERR_INVALID_ARG );
#endif
SPI_CHECK((bus_config->intr_flags & (ESP_INTR_FLAG_HIGH|ESP_INTR_FLAG_EDGE|ESP_INTR_FLAG_INTRDISABLED))==0, "intr flag not allowed", ESP_ERR_INVALID_ARG);
#ifndef CONFIG_SPI_SLAVE_ISR_IN_IRAM
@ -129,17 +137,6 @@ esp_err_t spi_slave_initialize(spi_host_device_t host, const spi_bus_config_t *b
spi_chan_claimed=spicommon_periph_claim(host, "spi slave");
SPI_CHECK(spi_chan_claimed, "host already in use", ESP_ERR_INVALID_STATE);
bool use_dma = dma_chan != 0;
if (use_dma) {
dma_chan_claimed=spicommon_dma_chan_claim(dma_chan);
if ( !dma_chan_claimed ) {
spicommon_periph_free( host );
SPI_CHECK(dma_chan_claimed, "dma channel already in use", ESP_ERR_INVALID_STATE);
}
spicommon_connect_spi_and_dma(host, dma_chan);
}
spihost[host] = malloc(sizeof(spi_slave_t));
if (spihost[host] == NULL) {
ret = ESP_ERR_NO_MEM;
@ -149,7 +146,16 @@ esp_err_t spi_slave_initialize(spi_host_device_t host, const spi_bus_config_t *b
memcpy(&spihost[host]->cfg, slave_config, sizeof(spi_slave_interface_config_t));
spihost[host]->id = host;
err = spicommon_bus_initialize_io(host, bus_config, dma_chan, SPICOMMON_BUSFLAG_SLAVE|bus_config->flags, &spihost[host]->flags);
bool use_dma = (dma_chan != SPI_DMA_DISABLED);
spihost[host]->dma_enabled = use_dma;
if (use_dma) {
ret = spicommon_slave_dma_chan_alloc(host, dma_chan, &actual_tx_dma_chan, &actual_rx_dma_chan);
if (ret != ESP_OK) {
goto cleanup;
}
}
err = spicommon_bus_initialize_io(host, bus_config, SPICOMMON_BUSFLAG_SLAVE|bus_config->flags, &spihost[host]->flags);
if (err!=ESP_OK) {
ret = err;
goto cleanup;
@ -162,7 +168,8 @@ esp_err_t spi_slave_initialize(spi_host_device_t host, const spi_bus_config_t *b
if (use_dma) freeze_cs(spihost[host]);
int dma_desc_ct = 0;
spihost[host]->dma_chan = dma_chan;
spihost[host]->tx_dma_chan = actual_tx_dma_chan;
spihost[host]->rx_dma_chan = actual_rx_dma_chan;
if (use_dma) {
//See how many dma descriptors we need and allocate them
dma_desc_ct = (bus_config->max_transfer_sz + SPI_MAX_DMA_LEN - 1) / SPI_MAX_DMA_LEN;
@ -220,6 +227,8 @@ esp_err_t spi_slave_initialize(spi_host_device_t host, const spi_bus_config_t *b
hal->tx_lsbfirst = (slave_config->flags & SPI_SLAVE_TXBIT_LSBFIRST) ? 1 : 0;
hal->mode = slave_config->mode;
hal->use_dma = use_dma;
hal->tx_dma_chan = actual_tx_dma_chan;
hal->rx_dma_chan = actual_rx_dma_chan;
spi_slave_hal_setup_device(hal);
@ -239,10 +248,14 @@ cleanup:
#endif
}
spi_slave_hal_deinit(&spihost[host]->hal);
if (spihost[host]->dma_enabled) {
spicommon_slave_free_dma(host);
}
free(spihost[host]);
spihost[host] = NULL;
spicommon_periph_free(host);
if (dma_chan != 0) spicommon_dma_chan_free(dma_chan);
return ret;
}
@ -252,8 +265,8 @@ esp_err_t spi_slave_free(spi_host_device_t host)
SPI_CHECK(spihost[host], "host not slave", ESP_ERR_INVALID_ARG);
if (spihost[host]->trans_queue) vQueueDelete(spihost[host]->trans_queue);
if (spihost[host]->ret_queue) vQueueDelete(spihost[host]->ret_queue);
if ( spihost[host]->dma_chan > 0 ) {
spicommon_dma_chan_free ( spihost[host]->dma_chan );
if (spihost[host]->dma_enabled) {
spicommon_slave_free_dma(host);
}
free(spihost[host]->hal.dmadesc_tx);
free(spihost[host]->hal.dmadesc_rx);
@ -274,9 +287,9 @@ esp_err_t SPI_SLAVE_ATTR spi_slave_queue_trans(spi_host_device_t host, const spi
BaseType_t r;
SPI_CHECK(is_valid_host(host), "invalid host", ESP_ERR_INVALID_ARG);
SPI_CHECK(spihost[host], "host not slave", ESP_ERR_INVALID_ARG);
SPI_CHECK(spihost[host]->dma_chan == 0 || trans_desc->tx_buffer==NULL || esp_ptr_dma_capable(trans_desc->tx_buffer),
SPI_CHECK(spihost[host]->dma_enabled == 0 || trans_desc->tx_buffer==NULL || esp_ptr_dma_capable(trans_desc->tx_buffer),
"txdata not in DMA-capable memory", ESP_ERR_INVALID_ARG);
SPI_CHECK(spihost[host]->dma_chan == 0 || trans_desc->rx_buffer==NULL ||
SPI_CHECK(spihost[host]->dma_enabled == 0 || trans_desc->rx_buffer==NULL ||
(esp_ptr_dma_capable(trans_desc->rx_buffer) && esp_ptr_word_aligned(trans_desc->rx_buffer) &&
(trans_desc->length%4==0)),
"rxdata not in DMA-capable memory or not WORD aligned", ESP_ERR_INVALID_ARG);
@ -332,7 +345,7 @@ static void SPI_SLAVE_ISR_ATTR spi_intr(void *arg)
assert(spi_slave_hal_usr_is_done(hal));
bool use_dma = host->dma_chan != 0;
bool use_dma = host->dma_enabled;
if (host->cur_trans) {
// When DMA is enabled, the slave rx dma suffers from unexpected transactions. Forbid reading until transaction ready.
if (use_dma) freeze_cs(host);
@ -341,7 +354,8 @@ static void SPI_SLAVE_ISR_ATTR spi_intr(void *arg)
host->cur_trans->trans_len = spi_slave_hal_get_rcv_bitlen(hal);
if (spi_slave_hal_dma_need_reset(hal)) {
spicommon_dmaworkaround_req_reset(host->dma_chan, spi_slave_restart_after_dmareset, host);
//On ESP32 and ESP32S2, actual_tx_dma_chan and actual_rx_dma_chan are always same
spicommon_dmaworkaround_req_reset(host->tx_dma_chan, spi_slave_restart_after_dmareset, host);
}
if (host->cfg.post_trans_cb) host->cfg.post_trans_cb(host->cur_trans);
//Okay, transaction is done.
@ -350,7 +364,8 @@ static void SPI_SLAVE_ISR_ATTR spi_intr(void *arg)
host->cur_trans = NULL;
}
if (use_dma) {
spicommon_dmaworkaround_idle(host->dma_chan);
//On ESP32 and ESP32S2, actual_tx_dma_chan and actual_rx_dma_chan are always same
spicommon_dmaworkaround_idle(host->tx_dma_chan);
if (spicommon_dmaworkaround_reset_in_progress()) {
//We need to wait for the reset to complete. Disable int (will be re-enabled on reset callback) and exit isr.
esp_intr_disable(host->intr);
@ -375,7 +390,8 @@ static void SPI_SLAVE_ISR_ATTR spi_intr(void *arg)
hal->tx_buffer = trans->tx_buffer;
if (use_dma) {
spicommon_dmaworkaround_transfer_active(host->dma_chan);
//On ESP32 and ESP32S2, actual_tx_dma_chan and actual_rx_dma_chan are always same
spicommon_dmaworkaround_transfer_active(host->tx_dma_chan);
}
spi_slave_hal_prepare_data(hal);

View File

@ -23,12 +23,15 @@
#include "hal/spi_slave_hd_hal.h"
//SPI1 can never be used as the slave
#define VALID_HOST(x) (x>SPI_HOST && x<=HSPI_HOST)
#if (SOC_SPI_PERIPH_NUM == 2)
#define VALID_HOST(x) ((x) == SPI2_HOST)
#elif (SOC_SPI_PERIPH_NUM == 3)
#define VALID_HOST(x) ((x) >= SPI2_HOST && (x) <= SPI3_HOST)
#endif
#define SPIHD_CHECK(cond,warn,ret) do{if(!(cond)){ESP_LOGE(TAG, warn); return ret;}} while(0)
typedef struct {
int dma_chan;
bool dma_enabled;
int max_transfer_sz;
uint32_t flags;
portMUX_TYPE int_spinlock;
@ -64,12 +67,18 @@ static void spi_slave_hd_intr_append(void *arg);
esp_err_t spi_slave_hd_init(spi_host_device_t host_id, const spi_bus_config_t *bus_config,
const spi_slave_hd_slot_config_t *config)
{
bool spi_chan_claimed, dma_chan_claimed;
bool spi_chan_claimed;
bool append_mode = (config->flags & SPI_SLAVE_HD_APPEND_MODE);
uint32_t actual_tx_dma_chan = 0;
uint32_t actual_rx_dma_chan = 0;
esp_err_t ret = ESP_OK;
SPIHD_CHECK(VALID_HOST(host_id), "invalid host", ESP_ERR_INVALID_ARG);
SPIHD_CHECK(config->dma_chan == 0 || config->dma_chan == host_id, "invalid dma channel", ESP_ERR_INVALID_ARG);
#if CONFIG_IDF_TARGET_ESP32S2
SPIHD_CHECK(config->dma_chan == SPI_DMA_DISABLED || config->dma_chan == (int)host_id || config->dma_chan == SPI_DMA_CH_AUTO, "invalid dma channel", ESP_ERR_INVALID_ARG);
#elif SOC_GDMA_SUPPORTED
SPIHD_CHECK(config->dma_chan == SPI_DMA_DISABLED || config->dma_chan == SPI_DMA_CH_AUTO, "invalid dma channel, chip only support spi dma channel auto-alloc", ESP_ERR_INVALID_ARG);
#endif
#if !CONFIG_IDF_TARGET_ESP32S2
//Append mode is only supported on ESP32S2 now
SPIHD_CHECK(append_mode == 0, "Append mode is only supported on ESP32S2 now", ESP_ERR_INVALID_ARG);
@ -78,29 +87,23 @@ esp_err_t spi_slave_hd_init(spi_host_device_t host_id, const spi_bus_config_t *b
spi_chan_claimed = spicommon_periph_claim(host_id, "slave_hd");
SPIHD_CHECK(spi_chan_claimed, "host already in use", ESP_ERR_INVALID_STATE);
if ( config->dma_chan != 0 ) {
dma_chan_claimed = spicommon_dma_chan_claim(config->dma_chan);
if (!dma_chan_claimed) {
spicommon_periph_free(host_id);
SPIHD_CHECK(dma_chan_claimed, "dma channel already in use", ESP_ERR_INVALID_STATE);
}
spicommon_connect_spi_and_dma(host_id, config->dma_chan);
}
spi_slave_hd_slot_t* host = malloc(sizeof(spi_slave_hd_slot_t));
spi_slave_hd_slot_t* host = calloc(1, sizeof(spi_slave_hd_slot_t));
if (host == NULL) {
ret = ESP_ERR_NO_MEM;
goto cleanup;
}
spihost[host_id] = host;
memset(host, 0, sizeof(spi_slave_hd_slot_t));
host->dma_chan = config->dma_chan;
host->int_spinlock = (portMUX_TYPE)portMUX_INITIALIZER_UNLOCKED;
host->dma_enabled = (config->dma_chan != SPI_DMA_DISABLED);
ret = spicommon_bus_initialize_io(host_id, bus_config, config->dma_chan,
SPICOMMON_BUSFLAG_SLAVE | bus_config->flags, &host->flags);
if (host->dma_enabled) {
ret = spicommon_slave_dma_chan_alloc(host_id, config->dma_chan, &actual_tx_dma_chan, &actual_rx_dma_chan);
if (ret != ESP_OK) {
goto cleanup;
}
}
ret = spicommon_bus_initialize_io(host_id, bus_config, SPICOMMON_BUSFLAG_SLAVE | bus_config->flags, &host->flags);
if (ret != ESP_OK) {
goto cleanup;
}
@ -113,14 +116,16 @@ esp_err_t spi_slave_hd_init(spi_host_device_t host_id, const spi_bus_config_t *b
.host_id = host_id,
.dma_in = SPI_LL_GET_HW(host_id),
.dma_out = SPI_LL_GET_HW(host_id),
.dma_chan = config->dma_chan,
.dma_enabled = host->dma_enabled,
.tx_dma_chan = actual_tx_dma_chan,
.rx_dma_chan = actual_rx_dma_chan,
.append_mode = append_mode,
.mode = config->mode,
.tx_lsbfirst = (config->flags & SPI_SLAVE_HD_RXBIT_LSBFIRST),
.rx_lsbfirst = (config->flags & SPI_SLAVE_HD_TXBIT_LSBFIRST),
};
if (config->dma_chan != 0) {
if (host->dma_enabled) {
//Malloc for all the DMA descriptors
uint32_t total_desc_size = spi_slave_hd_hal_get_total_desc_size(&host->hal, bus_config->max_transfer_sz);
host->hal.dmadesc_tx = heap_caps_malloc(total_desc_size, MALLOC_CAP_DMA);
@ -243,8 +248,8 @@ esp_err_t spi_slave_hd_deinit(spi_host_device_t host_id)
}
spicommon_periph_free(host_id);
if (host->dma_chan) {
spicommon_dma_chan_free(host->dma_chan);
if (host->dma_enabled) {
spicommon_slave_free_dma(host_id);
}
free(host);
spihost[host_id] = NULL;
@ -392,9 +397,7 @@ static IRAM_ATTR void spi_slave_hd_intr_append(void *arg)
spi_slave_hd_data_t *trans_desc;
while (1) {
bool trans_finish = false;
portENTER_CRITICAL_ISR(&host->int_spinlock);
trans_finish = spi_slave_hd_hal_get_tx_finished_trans(hal, (void **)&trans_desc);
portEXIT_CRITICAL_ISR(&host->int_spinlock);
if (!trans_finish) {
break;
}
@ -425,9 +428,7 @@ static IRAM_ATTR void spi_slave_hd_intr_append(void *arg)
size_t trans_len;
while (1) {
bool trans_finish = false;
portENTER_CRITICAL_ISR(&host->int_spinlock);
trans_finish = spi_slave_hd_hal_get_rx_finished_trans(hal, (void **)&trans_desc, &trans_len);
portEXIT_CRITICAL_ISR(&host->int_spinlock);
if (!trans_finish) {
break;
}
@ -546,17 +547,13 @@ esp_err_t spi_slave_hd_append_trans(spi_host_device_t host_id, spi_slave_chan_t
if (ret == pdFALSE) {
return ESP_ERR_TIMEOUT;
}
portENTER_CRITICAL(&host->int_spinlock);
err = spi_slave_hd_hal_txdma_append(hal, trans->data, trans->len, trans);
portEXIT_CRITICAL(&host->int_spinlock);
} else {
BaseType_t ret = xSemaphoreTake(host->rx_cnting_sem, timeout);
if (ret == pdFALSE) {
return ESP_ERR_TIMEOUT;
}
portENTER_CRITICAL(&host->int_spinlock);
err = spi_slave_hd_hal_rxdma_append(hal, trans->data, trans->len, trans);
portEXIT_CRITICAL(&host->int_spinlock);
}
if (err != ESP_OK) {
ESP_LOGE(TAG, "Wait until the DMA finishes its transaction");

View File

@ -122,7 +122,6 @@ static void continuous_adc_init(uint16_t adc1_chan_mask, uint16_t adc2_chan_mask
adc_digi_init_config_t adc_dma_config = {
.max_store_buf_size = TEST_COUNT*2,
.conv_num_each_intr = 128,
.dma_chan = SOC_GDMA_ADC_DMA_CHANNEL,
.adc1_chan_mask = adc1_chan_mask,
.adc2_chan_mask = adc2_chan_mask,
};

View File

@ -74,7 +74,7 @@ TEST_CASE("SPI Master clockdiv calculation routines", "[spi]")
.quadhd_io_num=-1
};
esp_err_t ret;
ret=spi_bus_initialize(TEST_SPI_HOST, &buscfg, 1);
ret = spi_bus_initialize(TEST_SPI_HOST, &buscfg, SPI_DMA_CH_AUTO);
TEST_ASSERT(ret==ESP_OK);
check_spi_pre_n_for(26000000, 1, 3);
@ -110,13 +110,10 @@ static spi_device_handle_t setup_spi_bus_loopback(int clkspeed, bool dma) {
.spics_io_num=PIN_NUM_CS,
.queue_size=3,
};
esp_err_t ret;
spi_device_handle_t handle;
ret=spi_bus_initialize(TEST_SPI_HOST, &buscfg, dma?1:0);
TEST_ASSERT(ret==ESP_OK);
ret=spi_bus_add_device(TEST_SPI_HOST, &devcfg, &handle);
TEST_ASSERT(ret==ESP_OK);
TEST_ESP_OK(spi_bus_initialize(TEST_SPI_HOST, &buscfg, dma ? SPI_DMA_CH_AUTO : 0));
TEST_ESP_OK(spi_bus_add_device(TEST_SPI_HOST, &devcfg, &handle));
//connect MOSI to two devices breaks the output, fix it.
spitest_gpio_output_sel(PIN_NUM_MOSI, FUNC_GPIO, spi_periph_signal[TEST_SPI_HOST].spid_out);
printf("Bus/dev inited.\n");
@ -276,8 +273,10 @@ static esp_err_t test_master_pins(int mosi, int miso, int sclk, int cs)
spi_device_interface_config_t master_cfg = SPI_DEVICE_TEST_DEFAULT_CONFIG();
master_cfg.spics_io_num = cs;
ret = spi_bus_initialize(TEST_SPI_HOST, &cfg, 1);
if (ret != ESP_OK) return ret;
ret = spi_bus_initialize(TEST_SPI_HOST, &cfg, SPI_DMA_CH_AUTO);
if (ret != ESP_OK) {
return ret;
}
spi_device_handle_t spi;
ret = spi_bus_add_device(TEST_SPI_HOST, &master_cfg, &spi);
@ -301,8 +300,10 @@ static esp_err_t test_slave_pins(int mosi, int miso, int sclk, int cs)
spi_slave_interface_config_t slave_cfg = SPI_SLAVE_TEST_DEFAULT_CONFIG();
slave_cfg.spics_io_num = cs;
ret = spi_slave_initialize(TEST_SLAVE_HOST, &cfg, &slave_cfg, TEST_DMA_CHAN_SLAVE);
if (ret != ESP_OK) return ret;
ret = spi_slave_initialize(TEST_SLAVE_HOST, &cfg, &slave_cfg, SPI_DMA_CH_AUTO);
if (ret != ESP_OK) {
return ret;
}
spi_slave_free(TEST_SLAVE_HOST);
return ESP_OK;
@ -315,7 +316,6 @@ TEST_CASE("spi placed on input-only pins", "[spi]")
TEST_ESP_OK(test_master_pins(PIN_NUM_MOSI, INPUT_ONLY_PIN, PIN_NUM_CLK, PIN_NUM_CS));
TEST_ASSERT(test_master_pins(PIN_NUM_MOSI, PIN_NUM_MISO, INPUT_ONLY_PIN, PIN_NUM_CS) != ESP_OK);
TEST_ASSERT(test_master_pins(PIN_NUM_MOSI, PIN_NUM_MISO, PIN_NUM_CLK, INPUT_ONLY_PIN) != ESP_OK);
TEST_ESP_OK(test_slave_pins(PIN_NUM_MOSI, PIN_NUM_MISO, PIN_NUM_CLK, PIN_NUM_CS));
TEST_ESP_OK(test_slave_pins(INPUT_ONLY_PIN, PIN_NUM_MISO, PIN_NUM_CLK, PIN_NUM_CS));
TEST_ASSERT(test_slave_pins(PIN_NUM_MOSI, INPUT_ONLY_PIN, PIN_NUM_CLK, PIN_NUM_CS) != ESP_OK);
@ -336,18 +336,18 @@ TEST_CASE("spi bus setting with different pin configs", "[spi]")
flags_expected = SPICOMMON_BUSFLAG_SCLK | SPICOMMON_BUSFLAG_MOSI | SPICOMMON_BUSFLAG_MISO | SPICOMMON_BUSFLAG_IOMUX_PINS | SPICOMMON_BUSFLAG_QUAD;
cfg = (spi_bus_config_t){.mosi_io_num = spi_periph_signal[TEST_SPI_HOST].spid_iomux_pin, .miso_io_num = spi_periph_signal[TEST_SPI_HOST].spiq_iomux_pin, .sclk_io_num = spi_periph_signal[TEST_SPI_HOST].spiclk_iomux_pin, .quadhd_io_num = spi_periph_signal[TEST_SPI_HOST].spihd_iomux_pin, .quadwp_io_num = spi_periph_signal[TEST_SPI_HOST].spiwp_iomux_pin,
.max_transfer_sz = 8, .flags = flags_expected};
TEST_ESP_OK(spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, 0, flags_expected|SPICOMMON_BUSFLAG_MASTER, &flags_o));
TEST_ESP_OK(spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, flags_expected|SPICOMMON_BUSFLAG_MASTER, &flags_o));
TEST_ASSERT_EQUAL_HEX32( flags_expected, flags_o );
TEST_ESP_OK(spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, 0, flags_expected|SPICOMMON_BUSFLAG_SLAVE, &flags_o));
TEST_ESP_OK(spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, flags_expected|SPICOMMON_BUSFLAG_SLAVE, &flags_o));
TEST_ASSERT_EQUAL_HEX32( flags_expected, flags_o );
ESP_LOGI(TAG, "test 4 iomux output pins...");
flags_expected = SPICOMMON_BUSFLAG_SCLK | SPICOMMON_BUSFLAG_MOSI | SPICOMMON_BUSFLAG_MISO | SPICOMMON_BUSFLAG_IOMUX_PINS | SPICOMMON_BUSFLAG_DUAL;
cfg = (spi_bus_config_t){.mosi_io_num = spi_periph_signal[TEST_SPI_HOST].spid_iomux_pin, .miso_io_num = spi_periph_signal[TEST_SPI_HOST].spiq_iomux_pin, .sclk_io_num = spi_periph_signal[TEST_SPI_HOST].spiclk_iomux_pin, .quadhd_io_num = -1, .quadwp_io_num = -1,
.max_transfer_sz = 8, .flags = flags_expected};
TEST_ESP_OK(spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, 0, flags_expected|SPICOMMON_BUSFLAG_MASTER, &flags_o));
TEST_ESP_OK(spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, flags_expected|SPICOMMON_BUSFLAG_MASTER, &flags_o));
TEST_ASSERT_EQUAL_HEX32( flags_expected, flags_o );
TEST_ESP_OK(spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, 0, flags_expected|SPICOMMON_BUSFLAG_SLAVE, &flags_o));
TEST_ESP_OK(spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, flags_expected|SPICOMMON_BUSFLAG_SLAVE, &flags_o));
TEST_ASSERT_EQUAL_HEX32( flags_expected, flags_o );
ESP_LOGI(TAG, "test 6 output pins...");
@ -355,9 +355,9 @@ TEST_CASE("spi bus setting with different pin configs", "[spi]")
//swap MOSI and MISO
cfg = (spi_bus_config_t){.mosi_io_num = spi_periph_signal[TEST_SPI_HOST].spiq_iomux_pin, .miso_io_num = spi_periph_signal[TEST_SPI_HOST].spid_iomux_pin, .sclk_io_num = spi_periph_signal[TEST_SPI_HOST].spiclk_iomux_pin, .quadhd_io_num = spi_periph_signal[TEST_SPI_HOST].spihd_iomux_pin, .quadwp_io_num = spi_periph_signal[TEST_SPI_HOST].spiwp_iomux_pin,
.max_transfer_sz = 8, .flags = flags_expected};
TEST_ESP_OK(spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, 0, flags_expected|SPICOMMON_BUSFLAG_MASTER, &flags_o));
TEST_ESP_OK(spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, flags_expected|SPICOMMON_BUSFLAG_MASTER, &flags_o));
TEST_ASSERT_EQUAL_HEX32( flags_expected, flags_o );
TEST_ESP_OK(spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, 0, flags_expected|SPICOMMON_BUSFLAG_SLAVE, &flags_o));
TEST_ESP_OK(spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, flags_expected|SPICOMMON_BUSFLAG_SLAVE, &flags_o));
TEST_ASSERT_EQUAL_HEX32( flags_expected, flags_o );
ESP_LOGI(TAG, "test 4 output pins...");
@ -365,9 +365,9 @@ TEST_CASE("spi bus setting with different pin configs", "[spi]")
//swap MOSI and MISO
cfg = (spi_bus_config_t){.mosi_io_num = spi_periph_signal[TEST_SPI_HOST].spiq_iomux_pin, .miso_io_num = spi_periph_signal[TEST_SPI_HOST].spid_iomux_pin, .sclk_io_num = spi_periph_signal[TEST_SPI_HOST].spiclk_iomux_pin, .quadhd_io_num = -1, .quadwp_io_num = -1,
.max_transfer_sz = 8, .flags = flags_expected};
TEST_ESP_OK(spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, 0, flags_expected|SPICOMMON_BUSFLAG_MASTER, &flags_o));
TEST_ESP_OK(spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, flags_expected|SPICOMMON_BUSFLAG_MASTER, &flags_o));
TEST_ASSERT_EQUAL_HEX32( flags_expected, flags_o );
TEST_ESP_OK(spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, 0, flags_expected|SPICOMMON_BUSFLAG_SLAVE, &flags_o));
TEST_ESP_OK(spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, flags_expected|SPICOMMON_BUSFLAG_SLAVE, &flags_o));
TEST_ASSERT_EQUAL_HEX32( flags_expected, flags_o );
#if !DISABLED_FOR_TARGETS(ESP32C3) //There is no input-only pin on esp32c3, so this test could be ignored.
@ -375,14 +375,14 @@ TEST_CASE("spi bus setting with different pin configs", "[spi]")
flags_expected = SPICOMMON_BUSFLAG_SCLK | SPICOMMON_BUSFLAG_MOSI | SPICOMMON_BUSFLAG_MISO | SPICOMMON_BUSFLAG_WPHD | SPICOMMON_BUSFLAG_GPIO_PINS;
cfg = (spi_bus_config_t){.mosi_io_num = spi_periph_signal[TEST_SPI_HOST].spid_iomux_pin, .miso_io_num = INPUT_ONLY_PIN, .sclk_io_num = spi_periph_signal[TEST_SPI_HOST].spiclk_iomux_pin, .quadhd_io_num = spi_periph_signal[TEST_SPI_HOST].spihd_iomux_pin, .quadwp_io_num = spi_periph_signal[TEST_SPI_HOST].spiwp_iomux_pin,
.max_transfer_sz = 8, .flags = flags_expected};
TEST_ESP_OK(spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, 0, flags_expected|SPICOMMON_BUSFLAG_MASTER, &flags_o));
TEST_ESP_OK(spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, flags_expected|SPICOMMON_BUSFLAG_MASTER, &flags_o));
TEST_ASSERT_EQUAL_HEX32( flags_expected, flags_o );
ESP_LOGI(TAG, "test slave 5 output pins and MISO on input-only pin...");
flags_expected = SPICOMMON_BUSFLAG_SCLK | SPICOMMON_BUSFLAG_MOSI | SPICOMMON_BUSFLAG_MISO | SPICOMMON_BUSFLAG_WPHD | SPICOMMON_BUSFLAG_GPIO_PINS;
cfg = (spi_bus_config_t){.mosi_io_num = INPUT_ONLY_PIN, .miso_io_num = spi_periph_signal[TEST_SPI_HOST].spiq_iomux_pin, .sclk_io_num = spi_periph_signal[TEST_SPI_HOST].spiclk_iomux_pin, .quadhd_io_num = spi_periph_signal[TEST_SPI_HOST].spihd_iomux_pin, .quadwp_io_num = spi_periph_signal[TEST_SPI_HOST].spiwp_iomux_pin,
.max_transfer_sz = 8, .flags = flags_expected};
TEST_ESP_OK(spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, 0, flags_expected|SPICOMMON_BUSFLAG_SLAVE, &flags_o));
TEST_ESP_OK(spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, flags_expected|SPICOMMON_BUSFLAG_SLAVE, &flags_o));
TEST_ASSERT_EQUAL_HEX32( flags_expected, flags_o );
ESP_LOGI(TAG, "test master 3 output pins and MOSI on input-only pin...");
@ -390,14 +390,14 @@ TEST_CASE("spi bus setting with different pin configs", "[spi]")
cfg = (spi_bus_config_t){.mosi_io_num = spi_periph_signal[TEST_SPI_HOST].spid_iomux_pin, .miso_io_num = INPUT_ONLY_PIN, .sclk_io_num = spi_periph_signal[TEST_SPI_HOST].spiclk_iomux_pin, .quadhd_io_num = -1, .quadwp_io_num = -1,
.max_transfer_sz = 8, .flags = flags_expected};
TEST_ESP_OK(spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, 0, flags_expected|SPICOMMON_BUSFLAG_MASTER, &flags_o));
TEST_ESP_OK(spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, flags_expected|SPICOMMON_BUSFLAG_MASTER, &flags_o));
TEST_ASSERT_EQUAL_HEX32( flags_expected, flags_o );
ESP_LOGI(TAG, "test slave 3 output pins and MISO on input-only pin...");
flags_expected = SPICOMMON_BUSFLAG_SCLK | SPICOMMON_BUSFLAG_MOSI | SPICOMMON_BUSFLAG_MISO | SPICOMMON_BUSFLAG_GPIO_PINS;
cfg = (spi_bus_config_t){.mosi_io_num = INPUT_ONLY_PIN, .miso_io_num = spi_periph_signal[TEST_SPI_HOST].spiq_iomux_pin, .sclk_io_num = spi_periph_signal[TEST_SPI_HOST].spiclk_iomux_pin, .quadhd_io_num = -1, .quadwp_io_num = -1,
.max_transfer_sz = 8, .flags = flags_expected};
TEST_ESP_OK(spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, 0, flags_expected|SPICOMMON_BUSFLAG_SLAVE, &flags_o));
TEST_ESP_OK(spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, flags_expected|SPICOMMON_BUSFLAG_SLAVE, &flags_o));
TEST_ASSERT_EQUAL_HEX32( flags_expected, flags_o );
#endif
@ -406,72 +406,72 @@ TEST_CASE("spi bus setting with different pin configs", "[spi]")
//swap MOSI and MISO
cfg = (spi_bus_config_t){.mosi_io_num = spi_periph_signal[TEST_SPI_HOST].spiq_iomux_pin, .miso_io_num = spi_periph_signal[TEST_SPI_HOST].spid_iomux_pin, .sclk_io_num = spi_periph_signal[TEST_SPI_HOST].spiclk_iomux_pin, .quadhd_io_num = spi_periph_signal[TEST_SPI_HOST].spihd_iomux_pin, .quadwp_io_num = spi_periph_signal[TEST_SPI_HOST].spiwp_iomux_pin,
.max_transfer_sz = 8, .flags = flags_expected};
TEST_ASSERT_EQUAL(ESP_ERR_INVALID_ARG, spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, 0, flags_expected|SPICOMMON_BUSFLAG_MASTER, &flags_o));
TEST_ASSERT_EQUAL(ESP_ERR_INVALID_ARG, spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, 0, flags_expected|SPICOMMON_BUSFLAG_SLAVE, &flags_o));
TEST_ASSERT_EQUAL(ESP_ERR_INVALID_ARG, spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, flags_expected|SPICOMMON_BUSFLAG_MASTER, &flags_o));
TEST_ASSERT_EQUAL(ESP_ERR_INVALID_ARG, spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, flags_expected|SPICOMMON_BUSFLAG_SLAVE, &flags_o));
ESP_LOGI(TAG, "check native flag for 4 output pins...");
flags_expected = SPICOMMON_BUSFLAG_IOMUX_PINS;
//swap MOSI and MISO
cfg = (spi_bus_config_t){.mosi_io_num = spi_periph_signal[TEST_SPI_HOST].spiq_iomux_pin, .miso_io_num = spi_periph_signal[TEST_SPI_HOST].spid_iomux_pin, .sclk_io_num = spi_periph_signal[TEST_SPI_HOST].spiclk_iomux_pin, .quadhd_io_num = -1, .quadwp_io_num = -1,
.max_transfer_sz = 8, .flags = flags_expected};
TEST_ASSERT_EQUAL(ESP_ERR_INVALID_ARG, spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, 0, flags_expected|SPICOMMON_BUSFLAG_MASTER, &flags_o));
TEST_ASSERT_EQUAL(ESP_ERR_INVALID_ARG, spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, 0, flags_expected|SPICOMMON_BUSFLAG_SLAVE, &flags_o));
TEST_ASSERT_EQUAL(ESP_ERR_INVALID_ARG, spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, flags_expected|SPICOMMON_BUSFLAG_MASTER, &flags_o));
TEST_ASSERT_EQUAL(ESP_ERR_INVALID_ARG, spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, flags_expected|SPICOMMON_BUSFLAG_SLAVE, &flags_o));
#if !DISABLED_FOR_TARGETS(ESP32C3) //There is no input-only pin on esp32c3, so this test could be ignored.
ESP_LOGI(TAG, "check dual flag for master 5 output pins and MISO/MOSI on input-only pin...");
flags_expected = SPICOMMON_BUSFLAG_DUAL | SPICOMMON_BUSFLAG_GPIO_PINS;
cfg = (spi_bus_config_t){.mosi_io_num = spi_periph_signal[TEST_SPI_HOST].spid_iomux_pin, .miso_io_num = INPUT_ONLY_PIN, .sclk_io_num = spi_periph_signal[TEST_SPI_HOST].spiclk_iomux_pin, .quadhd_io_num = spi_periph_signal[TEST_SPI_HOST].spihd_iomux_pin, .quadwp_io_num = spi_periph_signal[TEST_SPI_HOST].spiwp_iomux_pin,
.max_transfer_sz = 8, .flags = flags_expected};
TEST_ASSERT_EQUAL(ESP_ERR_INVALID_ARG, spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, 0, flags_expected|SPICOMMON_BUSFLAG_MASTER, &flags_o));
TEST_ASSERT_EQUAL(ESP_ERR_INVALID_ARG, spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, 0, flags_expected|SPICOMMON_BUSFLAG_SLAVE, &flags_o));
TEST_ASSERT_EQUAL(ESP_ERR_INVALID_ARG, spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, flags_expected|SPICOMMON_BUSFLAG_MASTER, &flags_o));
TEST_ASSERT_EQUAL(ESP_ERR_INVALID_ARG, spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, flags_expected|SPICOMMON_BUSFLAG_SLAVE, &flags_o));
cfg = (spi_bus_config_t){.mosi_io_num = INPUT_ONLY_PIN, .miso_io_num = spi_periph_signal[TEST_SPI_HOST].spiq_iomux_pin, .sclk_io_num = spi_periph_signal[TEST_SPI_HOST].spiclk_iomux_pin, .quadhd_io_num = spi_periph_signal[TEST_SPI_HOST].spihd_iomux_pin, .quadwp_io_num = spi_periph_signal[TEST_SPI_HOST].spiwp_iomux_pin,
.max_transfer_sz = 8, .flags = flags_expected};
TEST_ASSERT_EQUAL(ESP_ERR_INVALID_ARG, spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, 0, flags_expected|SPICOMMON_BUSFLAG_MASTER, &flags_o));
TEST_ASSERT_EQUAL(ESP_ERR_INVALID_ARG, spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, 0, flags_expected|SPICOMMON_BUSFLAG_SLAVE, &flags_o));
TEST_ASSERT_EQUAL(ESP_ERR_INVALID_ARG, spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, flags_expected|SPICOMMON_BUSFLAG_MASTER, &flags_o));
TEST_ASSERT_EQUAL(ESP_ERR_INVALID_ARG, spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, flags_expected|SPICOMMON_BUSFLAG_SLAVE, &flags_o));
ESP_LOGI(TAG, "check dual flag for master 3 output pins and MISO/MOSI on input-only pin...");
flags_expected = SPICOMMON_BUSFLAG_DUAL | SPICOMMON_BUSFLAG_GPIO_PINS;
cfg = (spi_bus_config_t){.mosi_io_num = spi_periph_signal[TEST_SPI_HOST].spid_iomux_pin, .miso_io_num = INPUT_ONLY_PIN, .sclk_io_num = spi_periph_signal[TEST_SPI_HOST].spiclk_iomux_pin, .quadhd_io_num = -1, .quadwp_io_num = -1,
.max_transfer_sz = 8, .flags = flags_expected};
TEST_ASSERT_EQUAL(ESP_ERR_INVALID_ARG, spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, 0, flags_expected|SPICOMMON_BUSFLAG_MASTER, &flags_o));
TEST_ASSERT_EQUAL(ESP_ERR_INVALID_ARG, spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, 0, flags_expected|SPICOMMON_BUSFLAG_SLAVE, &flags_o));
TEST_ASSERT_EQUAL(ESP_ERR_INVALID_ARG, spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, flags_expected|SPICOMMON_BUSFLAG_MASTER, &flags_o));
TEST_ASSERT_EQUAL(ESP_ERR_INVALID_ARG, spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, flags_expected|SPICOMMON_BUSFLAG_SLAVE, &flags_o));
cfg = (spi_bus_config_t){.mosi_io_num = INPUT_ONLY_PIN, .miso_io_num = spi_periph_signal[TEST_SPI_HOST].spiq_iomux_pin, .sclk_io_num = spi_periph_signal[TEST_SPI_HOST].spiclk_iomux_pin, .quadhd_io_num = -1, .quadwp_io_num = -1,
.max_transfer_sz = 8, .flags = flags_expected};
TEST_ASSERT_EQUAL(ESP_ERR_INVALID_ARG, spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, 0, flags_expected|SPICOMMON_BUSFLAG_MASTER, &flags_o));
TEST_ASSERT_EQUAL(ESP_ERR_INVALID_ARG, spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, 0, flags_expected|SPICOMMON_BUSFLAG_SLAVE, &flags_o));
TEST_ASSERT_EQUAL(ESP_ERR_INVALID_ARG, spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, flags_expected|SPICOMMON_BUSFLAG_MASTER, &flags_o));
TEST_ASSERT_EQUAL(ESP_ERR_INVALID_ARG, spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, flags_expected|SPICOMMON_BUSFLAG_SLAVE, &flags_o));
#endif
ESP_LOGI(TAG, "check sclk flag...");
flags_expected = SPICOMMON_BUSFLAG_SCLK;
cfg = (spi_bus_config_t){.mosi_io_num = spi_periph_signal[TEST_SPI_HOST].spid_iomux_pin, .miso_io_num = spi_periph_signal[TEST_SPI_HOST].spiq_iomux_pin, .sclk_io_num = -1, .quadhd_io_num = spi_periph_signal[TEST_SPI_HOST].spihd_iomux_pin, .quadwp_io_num = spi_periph_signal[TEST_SPI_HOST].spiwp_iomux_pin,
.max_transfer_sz = 8, .flags = flags_expected};
TEST_ASSERT_EQUAL(ESP_ERR_INVALID_ARG, spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, 0, flags_expected|SPICOMMON_BUSFLAG_MASTER, &flags_o));
TEST_ASSERT_EQUAL(ESP_ERR_INVALID_ARG, spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, 0, flags_expected|SPICOMMON_BUSFLAG_SLAVE, &flags_o));
TEST_ASSERT_EQUAL(ESP_ERR_INVALID_ARG, spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, flags_expected|SPICOMMON_BUSFLAG_MASTER, &flags_o));
TEST_ASSERT_EQUAL(ESP_ERR_INVALID_ARG, spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, flags_expected|SPICOMMON_BUSFLAG_SLAVE, &flags_o));
ESP_LOGI(TAG, "check mosi flag...");
flags_expected = SPICOMMON_BUSFLAG_MOSI;
cfg = (spi_bus_config_t){.mosi_io_num = -1, .miso_io_num = spi_periph_signal[TEST_SPI_HOST].spiq_iomux_pin, .sclk_io_num = spi_periph_signal[TEST_SPI_HOST].spiclk_iomux_pin, .quadhd_io_num = spi_periph_signal[TEST_SPI_HOST].spihd_iomux_pin, .quadwp_io_num = spi_periph_signal[TEST_SPI_HOST].spiwp_iomux_pin,
.max_transfer_sz = 8, .flags = flags_expected};
TEST_ASSERT_EQUAL(ESP_ERR_INVALID_ARG, spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, 0, flags_expected|SPICOMMON_BUSFLAG_MASTER, &flags_o));
TEST_ASSERT_EQUAL(ESP_ERR_INVALID_ARG, spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, 0, flags_expected|SPICOMMON_BUSFLAG_SLAVE, &flags_o));
TEST_ASSERT_EQUAL(ESP_ERR_INVALID_ARG, spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, flags_expected|SPICOMMON_BUSFLAG_MASTER, &flags_o));
TEST_ASSERT_EQUAL(ESP_ERR_INVALID_ARG, spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, flags_expected|SPICOMMON_BUSFLAG_SLAVE, &flags_o));
ESP_LOGI(TAG, "check miso flag...");
flags_expected = SPICOMMON_BUSFLAG_MISO;
cfg = (spi_bus_config_t){.mosi_io_num = spi_periph_signal[TEST_SPI_HOST].spid_iomux_pin, .miso_io_num = -1, .sclk_io_num = spi_periph_signal[TEST_SPI_HOST].spiclk_iomux_pin, .quadhd_io_num = spi_periph_signal[TEST_SPI_HOST].spihd_iomux_pin, .quadwp_io_num = spi_periph_signal[TEST_SPI_HOST].spiwp_iomux_pin,
.max_transfer_sz = 8, .flags = flags_expected};
TEST_ASSERT_EQUAL(ESP_ERR_INVALID_ARG, spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, 0, flags_expected|SPICOMMON_BUSFLAG_MASTER, &flags_o));
TEST_ASSERT_EQUAL(ESP_ERR_INVALID_ARG, spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, 0, flags_expected|SPICOMMON_BUSFLAG_SLAVE, &flags_o));
TEST_ASSERT_EQUAL(ESP_ERR_INVALID_ARG, spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, flags_expected|SPICOMMON_BUSFLAG_MASTER, &flags_o));
TEST_ASSERT_EQUAL(ESP_ERR_INVALID_ARG, spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, flags_expected|SPICOMMON_BUSFLAG_SLAVE, &flags_o));
ESP_LOGI(TAG, "check quad flag...");
flags_expected = SPICOMMON_BUSFLAG_QUAD;
cfg = (spi_bus_config_t){.mosi_io_num = spi_periph_signal[TEST_SPI_HOST].spid_iomux_pin, .miso_io_num = spi_periph_signal[TEST_SPI_HOST].spiq_iomux_pin, .sclk_io_num = spi_periph_signal[TEST_SPI_HOST].spiclk_iomux_pin, .quadhd_io_num = -1, .quadwp_io_num = spi_periph_signal[TEST_SPI_HOST].spiwp_iomux_pin,
.max_transfer_sz = 8, .flags = flags_expected};
TEST_ASSERT_EQUAL(ESP_ERR_INVALID_ARG, spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, 0, flags_expected|SPICOMMON_BUSFLAG_MASTER, &flags_o));
TEST_ASSERT_EQUAL(ESP_ERR_INVALID_ARG, spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, 0, flags_expected|SPICOMMON_BUSFLAG_SLAVE, &flags_o));
TEST_ASSERT_EQUAL(ESP_ERR_INVALID_ARG, spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, flags_expected|SPICOMMON_BUSFLAG_MASTER, &flags_o));
TEST_ASSERT_EQUAL(ESP_ERR_INVALID_ARG, spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, flags_expected|SPICOMMON_BUSFLAG_SLAVE, &flags_o));
cfg = (spi_bus_config_t){.mosi_io_num = spi_periph_signal[TEST_SPI_HOST].spid_iomux_pin, .miso_io_num = spi_periph_signal[TEST_SPI_HOST].spiq_iomux_pin, .sclk_io_num = spi_periph_signal[TEST_SPI_HOST].spiclk_iomux_pin, .quadhd_io_num = spi_periph_signal[TEST_SPI_HOST].spihd_iomux_pin, .quadwp_io_num = -1,
.max_transfer_sz = 8, .flags = flags_expected};
TEST_ASSERT_EQUAL(ESP_ERR_INVALID_ARG, spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, 0, flags_expected|SPICOMMON_BUSFLAG_MASTER, &flags_o));
TEST_ASSERT_EQUAL(ESP_ERR_INVALID_ARG, spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, 0, flags_expected|SPICOMMON_BUSFLAG_SLAVE, &flags_o));
TEST_ASSERT_EQUAL(ESP_ERR_INVALID_ARG, spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, flags_expected|SPICOMMON_BUSFLAG_MASTER, &flags_o));
TEST_ASSERT_EQUAL(ESP_ERR_INVALID_ARG, spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg, flags_expected|SPICOMMON_BUSFLAG_SLAVE, &flags_o));
}
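As the calls above show, spicommon_bus_initialize_io() has also lost its dma_chan argument, so the bus flags now follow the config directly. A small sketch of the new call shape, reusing the helpers from this test file:
// Sketch: the dma_chan argument is gone; the realised bus capabilities come back in flags_o.
static void init_io_new_signature(void)
{
    uint32_t flags_o = 0;
    spi_bus_config_t cfg = SPI_BUS_TEST_DEFAULT_CONFIG();   // helper used throughout this file
    TEST_ESP_OK(spicommon_bus_initialize_io(TEST_SPI_HOST, &cfg,
                SPICOMMON_BUSFLAG_MASTER | SPICOMMON_BUSFLAG_SCLK | SPICOMMON_BUSFLAG_MOSI | SPICOMMON_BUSFLAG_MISO,
                &flags_o));
    TEST_ASSERT(flags_o & SPICOMMON_BUSFLAG_MOSI);
}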
TEST_CASE("SPI Master no response when switch from host1 (HSPI) to host2 (VSPI)", "[spi]")
@ -507,30 +507,29 @@ TEST_CASE("SPI Master no response when switch from host1 (HSPI) to host2 (VSPI)"
//initialize for first host
host = TEST_SPI_HOST;
TEST_ASSERT(spi_bus_initialize(host, &bus_config, GET_DMA_CHAN(host)) == ESP_OK);
TEST_ASSERT(spi_bus_add_device(host, &device_config, &spi) == ESP_OK);
TEST_ESP_OK(spi_bus_initialize(host, &bus_config, SPI_DMA_CH_AUTO));
TEST_ESP_OK(spi_bus_add_device(host, &device_config, &spi));
printf("before first xmit\n");
TEST_ASSERT(spi_device_transmit(spi, &transaction) == ESP_OK);
TEST_ESP_OK(spi_device_transmit(spi, &transaction));
printf("after first xmit\n");
TEST_ASSERT(spi_bus_remove_device(spi) == ESP_OK);
TEST_ASSERT(spi_bus_free(host) == ESP_OK);
TEST_ESP_OK(spi_bus_remove_device(spi));
TEST_ESP_OK(spi_bus_free(host));
//now for the second host, which previously failed here
host = TEST_SLAVE_HOST;
TEST_ASSERT(spi_bus_initialize(host, &bus_config, GET_DMA_CHAN(host)) == ESP_OK);
TEST_ASSERT(spi_bus_add_device(host, &device_config, &spi) == ESP_OK);
TEST_ESP_OK(spi_bus_initialize(host, &bus_config, SPI_DMA_CH_AUTO));
TEST_ESP_OK(spi_bus_add_device(host, &device_config, &spi));
printf("before second xmit\n");
// the original version (with a bit mis-written) gets stuck here.
TEST_ASSERT(spi_device_transmit(spi, &transaction) == ESP_OK);
TEST_ESP_OK(spi_device_transmit(spi, &transaction));
// the test case succeeds when this is printed.
printf("after second xmit\n");
TEST_ASSERT(spi_bus_remove_device(spi) == ESP_OK);
TEST_ASSERT(spi_bus_free(host) == ESP_OK);
TEST_ESP_OK(spi_bus_remove_device(spi));
TEST_ESP_OK(spi_bus_free(host));
}
DRAM_ATTR static uint32_t data_dram[80]={0};
@ -588,11 +587,9 @@ TEST_CASE("SPI Master DMA test, TX and RX in different regions", "[spi]")
spi_device_interface_config_t devcfg=SPI_DEVICE_TEST_DEFAULT_CONFIG();
//Initialize the SPI bus
ret=spi_bus_initialize(TEST_SPI_HOST, &buscfg, 1);
TEST_ASSERT(ret==ESP_OK);
TEST_ESP_OK(spi_bus_initialize(TEST_SPI_HOST, &buscfg, SPI_DMA_CH_AUTO));
//Attach the LCD to the SPI bus
ret=spi_bus_add_device(TEST_SPI_HOST, &devcfg, &spi);
TEST_ASSERT(ret==ESP_OK);
TEST_ESP_OK(spi_bus_add_device(TEST_SPI_HOST, &devcfg, &spi));
//connecting MOSI to two devices breaks the output, fix it.
spitest_gpio_output_sel(buscfg.mosi_io_num, FUNC_GPIO, spi_periph_signal[TEST_SPI_HOST].spid_out);
@ -658,7 +655,6 @@ TEST_CASE("SPI Master DMA test: length, start, not aligned", "[spi]")
uint8_t tx_buf[320]={0x12, 0x34, 0x56, 0x78, 0x9a, 0xbc, 0xde, 0xf0, 0xaa, 0xcc, 0xff, 0xee, 0x55, 0x77, 0x88, 0x43};
uint8_t rx_buf[320];
esp_err_t ret;
spi_device_handle_t spi;
spi_bus_config_t buscfg={
.miso_io_num=PIN_NUM_MOSI,
@ -675,11 +671,9 @@ TEST_CASE("SPI Master DMA test: length, start, not aligned", "[spi]")
.pre_cb=NULL,
};
//Initialize the SPI bus
ret=spi_bus_initialize(TEST_SPI_HOST, &buscfg, 1);
TEST_ASSERT(ret==ESP_OK);
TEST_ESP_OK(spi_bus_initialize(TEST_SPI_HOST, &buscfg, SPI_DMA_CH_AUTO));
//Attach the LCD to the SPI bus
ret=spi_bus_add_device(TEST_SPI_HOST, &devcfg, &spi);
TEST_ASSERT(ret==ESP_OK);
TEST_ESP_OK(spi_bus_add_device(TEST_SPI_HOST, &devcfg, &spi));
//connecting MOSI to two devices breaks the output, fix it.
spitest_gpio_output_sel(buscfg.mosi_io_num, FUNC_GPIO, spi_periph_signal[TEST_SPI_HOST].spid_out);
@ -746,7 +740,7 @@ void test_cmd_addr(spi_slave_task_context_t *slave_context, bool lsb_first)
//initial master, mode 0, 1MHz
spi_bus_config_t buscfg=SPI_BUS_TEST_DEFAULT_CONFIG();
buscfg.quadhd_io_num = UNCONNECTED_PIN;
TEST_ESP_OK(spi_bus_initialize(TEST_SPI_HOST, &buscfg, 1));
TEST_ESP_OK(spi_bus_initialize(TEST_SPI_HOST, &buscfg, SPI_DMA_CH_AUTO));
spi_device_interface_config_t devcfg=SPI_DEVICE_TEST_DEFAULT_CONFIG();
devcfg.clock_speed_hz = 1*1000*1000;
if (lsb_first) devcfg.flags |= SPI_DEVICE_BIT_LSBFIRST;
@ -984,6 +978,8 @@ TEST_CASE("SPI master hd dma TX without RX test", "[spi]")
TEST_ESP_OK(spi_bus_add_device(TEST_SPI_HOST, &dev_cfg, &spi));
spi_slave_interface_config_t slave_cfg = SPI_SLAVE_TEST_DEFAULT_CONFIG();
printf("TEST_SLAVE_HOST is %d\n", TEST_SLAVE_HOST);
TEST_ESP_OK(spi_slave_initialize(TEST_SLAVE_HOST, &bus_cfg, &slave_cfg, TEST_SLAVE_HOST));
same_pin_func_sel(bus_cfg, dev_cfg, 0);
@ -1076,16 +1072,13 @@ TEST_CASE("SPI master hd dma TX without RX test", "[spi]")
static void speed_setup(spi_device_handle_t* spi, bool use_dma)
{
esp_err_t ret;
spi_bus_config_t buscfg=SPI_BUS_TEST_DEFAULT_CONFIG();
spi_device_interface_config_t devcfg=SPI_DEVICE_TEST_DEFAULT_CONFIG();
devcfg.queue_size=8; //We want to be able to queue 7 transactions at a time
//Initialize the SPI bus and the device to test
ret=spi_bus_initialize(TEST_SPI_HOST, &buscfg, (use_dma? GET_DMA_CHAN(TEST_SPI_HOST): 0));
TEST_ASSERT(ret==ESP_OK);
ret=spi_bus_add_device(TEST_SPI_HOST, &devcfg, spi);
TEST_ASSERT(ret==ESP_OK);
TEST_ESP_OK(spi_bus_initialize(TEST_SPI_HOST, &buscfg, (use_dma ? SPI_DMA_CH_AUTO : 0)));
TEST_ESP_OK(spi_bus_add_device(TEST_SPI_HOST, &devcfg, spi));
}
static void sorted_array_insert(uint32_t* array, int* size, uint32_t item)


@ -97,15 +97,16 @@ static void local_test_start(spi_device_handle_t *spi, int freq, const spitest_p
//slave config
slvcfg.mode = pset->mode;
slave_pull_up(&buscfg, slvcfg.spics_io_num);
TEST_ESP_OK(spi_bus_initialize(TEST_SPI_HOST, &buscfg, pset->master_dma_chan));
int dma_chan = (pset->master_dma_chan == 0) ? 0 : SPI_DMA_CH_AUTO;
TEST_ESP_OK(spi_bus_initialize(TEST_SPI_HOST, &buscfg, dma_chan));
TEST_ESP_OK(spi_bus_add_device(TEST_SPI_HOST, &devcfg, spi));
//the slave automatically uses iomux pins if the pins are on VSPI_* pins
buscfg.quadhd_io_num = -1;
TEST_ESP_OK(spi_slave_initialize(TEST_SLAVE_HOST, &buscfg, &slvcfg, pset->slave_dma_chan));
int slave_dma_chan = (pset->slave_dma_chan == 0) ? 0 : SPI_DMA_CH_AUTO;
TEST_ESP_OK(spi_slave_initialize(TEST_SLAVE_HOST, &buscfg, &slvcfg, slave_dma_chan));
//initializing master and slave on the same pins breaks some of the output configs, fix them
if (pset->master_iomux) {
@ -392,7 +393,7 @@ static spitest_param_set_t mode_pgroup[] = {
.master_limit = SPI_MASTER_FREQ_13M,
.dup = FULL_DUPLEX,
.mode = 0,
.slave_dma_chan = 2,
.slave_dma_chan = SPI_DMA_CH_AUTO,
.master_iomux = false,
.slave_iomux = LOCAL_MODE_TEST_SLAVE_IOMUX,
.slave_tv_ns = TV_INT_CONNECT,
@ -404,7 +405,7 @@ static spitest_param_set_t mode_pgroup[] = {
.master_limit = SPI_MASTER_FREQ_13M,
.dup = FULL_DUPLEX,
.mode = 1,
.slave_dma_chan = 2,
.slave_dma_chan = SPI_DMA_CH_AUTO,
.master_iomux = false,
.slave_iomux = LOCAL_MODE_TEST_SLAVE_IOMUX,
.slave_tv_ns = TV_INT_CONNECT,
@ -415,7 +416,7 @@ static spitest_param_set_t mode_pgroup[] = {
.master_limit = SPI_MASTER_FREQ_13M,
.dup = FULL_DUPLEX,
.mode = 2,
.slave_dma_chan = 2,
.slave_dma_chan = SPI_DMA_CH_AUTO,
.master_iomux = false,
.slave_iomux = LOCAL_MODE_TEST_SLAVE_IOMUX,
.slave_tv_ns = TV_INT_CONNECT,
@ -427,7 +428,7 @@ static spitest_param_set_t mode_pgroup[] = {
.master_limit = SPI_MASTER_FREQ_13M,
.dup = FULL_DUPLEX,
.mode = 3,
.slave_dma_chan = 2,
.slave_dma_chan = SPI_DMA_CH_AUTO,
.master_iomux = false,
.slave_iomux = LOCAL_MODE_TEST_SLAVE_IOMUX,
.slave_tv_ns = TV_INT_CONNECT,
@ -470,7 +471,7 @@ static spitest_param_set_t mode_pgroup[] = {
.freq_list = test_freq_mode_local,
.dup = HALF_DUPLEX_MISO,
.mode = 0,
.slave_dma_chan = 2,
.slave_dma_chan = SPI_DMA_CH_AUTO,
.master_iomux = false,
.slave_iomux = LOCAL_MODE_TEST_SLAVE_IOMUX,
.slave_tv_ns = TV_INT_CONNECT+SLAVE_EXTRA_DELAY_DMA,
@ -480,7 +481,7 @@ static spitest_param_set_t mode_pgroup[] = {
.freq_list = test_freq_mode_local,
.dup = HALF_DUPLEX_MISO,
.mode = 1,
.slave_dma_chan = 2,
.slave_dma_chan = SPI_DMA_CH_AUTO,
.master_iomux = false,
.slave_iomux = LOCAL_MODE_TEST_SLAVE_IOMUX,
.slave_tv_ns = TV_INT_CONNECT,
@ -490,7 +491,7 @@ static spitest_param_set_t mode_pgroup[] = {
.freq_list = test_freq_mode_local,
.dup = HALF_DUPLEX_MISO,
.mode = 2,
.slave_dma_chan = 2,
.slave_dma_chan = SPI_DMA_CH_AUTO,
.master_iomux = false,
.slave_iomux = LOCAL_MODE_TEST_SLAVE_IOMUX,
.slave_tv_ns = TV_INT_CONNECT+SLAVE_EXTRA_DELAY_DMA,
@ -500,7 +501,7 @@ static spitest_param_set_t mode_pgroup[] = {
.freq_list = test_freq_mode_local,
.dup = HALF_DUPLEX_MISO,
.mode = 3,
.slave_dma_chan = 2,
.slave_dma_chan = SPI_DMA_CH_AUTO,
.master_iomux = false,
.slave_iomux = LOCAL_MODE_TEST_SLAVE_IOMUX,
.slave_tv_ns = TV_INT_CONNECT,
@ -545,7 +546,7 @@ TEST_CASE("Slave receive correct data", "[spi]")
.master_iomux = false,
.slave_iomux = false,
.master_dma_chan = 0,
.slave_dma_chan = (dma_chan? TEST_DMA_CHAN_SLAVE: 0),
.slave_dma_chan = (dma_chan ? SPI_DMA_CH_AUTO: 0),
};
ESP_LOGI(SLAVE_TAG, "Test slave recv @ mode %d, dma enabled=%d", spi_mode, dma_chan);
@ -702,7 +703,9 @@ static void test_master_start(spi_device_handle_t *spi, int freq, const spitest_
devpset.input_delay_ns = pset->slave_tv_ns;
devpset.clock_speed_hz = freq;
if (pset->master_limit != 0 && freq > pset->master_limit) devpset.flags |= SPI_DEVICE_NO_DUMMY;
TEST_ESP_OK(spi_bus_initialize(TEST_SPI_HOST, &buspset, pset->master_dma_chan));
int dma_chan = (pset->master_dma_chan == 0) ? 0 : SPI_DMA_CH_AUTO;
TEST_ESP_OK(spi_bus_initialize(TEST_SPI_HOST, &buspset, dma_chan));
TEST_ESP_OK(spi_bus_add_device(TEST_SPI_HOST, &devpset, spi));
//prepare data for the slave
@ -822,7 +825,8 @@ static void timing_slave_start(int speed, const spitest_param_set_t* pset, spite
//Enable pull-ups on SPI lines so we don't detect rogue pulses when no master is connected.
slave_pull_up(&slv_buscfg, slvcfg.spics_io_num);
TEST_ESP_OK(spi_slave_initialize(TEST_SLAVE_HOST, &slv_buscfg, &slvcfg, pset->slave_dma_chan));
int slave_dma_chan = (pset->slave_dma_chan == 0) ? 0 : SPI_DMA_CH_AUTO;
TEST_ESP_OK(spi_slave_initialize(TEST_SLAVE_HOST, &slv_buscfg, &slvcfg, slave_dma_chan));
//prepare data for the master
for (int i = 0; i < pset->test_size; i++) {
@ -1082,8 +1086,8 @@ spitest_param_set_t mode_conf[] = {
.slave_iomux = true,
.slave_tv_ns = DELAY_HCLK_UNTIL_7M,
.mode = 0,
.master_dma_chan = 1,
.slave_dma_chan = 1,
.master_dma_chan = SPI_DMA_CH_AUTO,
.slave_dma_chan = SPI_DMA_CH_AUTO,
.length_aligned = true,
},
{ .pset_name = "mode 1, DMA",
@ -1094,8 +1098,8 @@ spitest_param_set_t mode_conf[] = {
.slave_iomux = true,
.slave_tv_ns = TV_WITH_ESP_SLAVE,
.mode = 1,
.master_dma_chan = 1,
.slave_dma_chan = 1,
.master_dma_chan = SPI_DMA_CH_AUTO,
.slave_dma_chan = SPI_DMA_CH_AUTO,
.length_aligned = true,
},
{ .pset_name = "mode 2, DMA",
@ -1106,8 +1110,8 @@ spitest_param_set_t mode_conf[] = {
.slave_iomux = true,
.slave_tv_ns = DELAY_HCLK_UNTIL_7M,
.mode = 2,
.master_dma_chan = 1,
.slave_dma_chan = 1,
.master_dma_chan = SPI_DMA_CH_AUTO,
.slave_dma_chan = SPI_DMA_CH_AUTO,
.length_aligned = true,
},
{ .pset_name = "mode 3, DMA",
@ -1118,8 +1122,8 @@ spitest_param_set_t mode_conf[] = {
.slave_iomux = true,
.slave_tv_ns = TV_WITH_ESP_SLAVE,
.mode = 3,
.master_dma_chan = 1,
.slave_dma_chan = 1,
.master_dma_chan = SPI_DMA_CH_AUTO,
.slave_dma_chan = SPI_DMA_CH_AUTO,
.length_aligned = true,
},
//the master can only read up to 16MHz, use half-duplex mode to read at 20MHz.
@ -1130,8 +1134,8 @@ spitest_param_set_t mode_conf[] = {
.slave_iomux = true,
.slave_tv_ns = TV_WITH_ESP_SLAVE,
.mode = 0,
.master_dma_chan = 1,
.slave_dma_chan = 1,
.master_dma_chan = SPI_DMA_CH_AUTO,
.slave_dma_chan = SPI_DMA_CH_AUTO,
},
{ .pset_name = "mode 1, DMA, 20M",
.freq_list = test_freq_20M_only,
@ -1140,8 +1144,8 @@ spitest_param_set_t mode_conf[] = {
.slave_iomux = true,
.slave_tv_ns = TV_WITH_ESP_SLAVE,
.mode = 1,
.master_dma_chan = 1,
.slave_dma_chan = 1,
.master_dma_chan = SPI_DMA_CH_AUTO,
.slave_dma_chan = SPI_DMA_CH_AUTO,
},
{ .pset_name = "mode 2, DMA, 20M",
.freq_list = test_freq_20M_only,
@ -1150,8 +1154,8 @@ spitest_param_set_t mode_conf[] = {
.slave_iomux = true,
.slave_tv_ns = TV_WITH_ESP_SLAVE,
.mode = 2,
.master_dma_chan = 1,
.slave_dma_chan = 1,
.master_dma_chan = SPI_DMA_CH_AUTO,
.slave_dma_chan = SPI_DMA_CH_AUTO,
},
{ .pset_name = "mode 3, DMA, 20M",
.freq_list = test_freq_20M_only,
@ -1160,8 +1164,8 @@ spitest_param_set_t mode_conf[] = {
.slave_iomux = true,
.slave_tv_ns = TV_WITH_ESP_SLAVE,
.mode = 3,
.master_dma_chan = 1,
.slave_dma_chan = 1,
.master_dma_chan = SPI_DMA_CH_AUTO,
.slave_dma_chan = SPI_DMA_CH_AUTO,
},
};
TEST_SPI_MASTER_SLAVE(MODE, mode_conf, "")


@ -48,7 +48,7 @@ static void master_init_nodma( spi_device_handle_t* spi)
.cs_ena_pretrans=1,
};
//Initialize the SPI bus
ret=spi_bus_initialize(TEST_SPI_HOST, &buscfg, 0);
ret=spi_bus_initialize(TEST_SPI_HOST, &buscfg, SPI_DMA_CH_AUTO);
TEST_ASSERT(ret==ESP_OK);
//Attach the LCD to the SPI bus
ret=spi_bus_add_device(TEST_SPI_HOST, &devcfg, spi);
@ -75,7 +75,7 @@ static void slave_init(void)
gpio_set_pull_mode(PIN_NUM_CLK, GPIO_PULLUP_ONLY);
gpio_set_pull_mode(PIN_NUM_CS, GPIO_PULLUP_ONLY);
//Initialize SPI slave interface
TEST_ESP_OK( spi_slave_initialize(TEST_SLAVE_HOST, &buscfg, &slvcfg, 2) );
TEST_ESP_OK(spi_slave_initialize(TEST_SLAVE_HOST, &buscfg, &slvcfg, SPI_DMA_CH_AUTO));
}
TEST_CASE("test slave send unaligned","[spi]")
@ -220,7 +220,7 @@ static void unaligned_test_slave(void)
{
spi_bus_config_t buscfg = SPI_BUS_TEST_DEFAULT_CONFIG();
spi_slave_interface_config_t slvcfg = SPI_SLAVE_TEST_DEFAULT_CONFIG();
TEST_ESP_OK(spi_slave_initialize(TEST_SLAVE_HOST, &buscfg, &slvcfg, TEST_SLAVE_HOST));
TEST_ESP_OK(spi_slave_initialize(TEST_SLAVE_HOST, &buscfg, &slvcfg, SPI_DMA_CH_AUTO));
uint8_t *slave_send_buf = heap_caps_malloc(BUF_SIZE, MALLOC_CAP_DMA);
uint8_t *slave_recv_buf = heap_caps_calloc(BUF_SIZE, 1, MALLOC_CAP_DMA);


@ -99,7 +99,7 @@ static void init_master_hd(spi_device_handle_t* spi, const spitest_param_set_t*
bus_cfg.flags |= SPICOMMON_BUSFLAG_GPIO_PINS;
#endif
TEST_ESP_OK(spi_bus_initialize(TEST_SPI_HOST, &bus_cfg, TEST_SPI_HOST));
TEST_ESP_OK(spi_bus_initialize(TEST_SPI_HOST, &bus_cfg, SPI_DMA_CH_AUTO));
spi_device_interface_config_t dev_cfg = SPI_DEVICE_TEST_DEFAULT_CONFIG();
dev_cfg.flags = SPI_DEVICE_HALFDUPLEX;
dev_cfg.command_bits = 8;
@ -122,7 +122,7 @@ static void init_slave_hd(int mode, bool append_mode, const spi_slave_hd_callbac
#endif
spi_slave_hd_slot_config_t slave_hd_cfg = SPI_SLOT_TEST_DEFAULT_CONFIG();
slave_hd_cfg.mode = mode;
slave_hd_cfg.dma_chan = TEST_SLAVE_HOST;
slave_hd_cfg.dma_chan = SPI_DMA_CH_AUTO;
if (append_mode) {
slave_hd_cfg.flags |= SPI_SLAVE_HD_APPEND_MODE;
}
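Putting the two changed lines together, a slave-HD slot is now configured roughly as below; SPI_SLOT_TEST_DEFAULT_CONFIG is the test helper used above and bus_cfg is assumed to be the spi_bus_config_t prepared earlier in the same function:
// Sketch: slave-HD slot setup after this change.
spi_slave_hd_slot_config_t slave_hd_cfg = SPI_SLOT_TEST_DEFAULT_CONFIG();
slave_hd_cfg.mode = 0;
slave_hd_cfg.dma_chan = SPI_DMA_CH_AUTO;           // previously a fixed channel number
slave_hd_cfg.flags |= SPI_SLAVE_HD_APPEND_MODE;    // only when the append-mode API is wanted
TEST_ESP_OK(spi_slave_hd_init(TEST_SLAVE_HOST, &bus_cfg, &slave_hd_cfg));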


@ -449,7 +449,7 @@ esp_err_t set_client_config(const char *hostname, size_t hostlen, esp_tls_cfg_t
if (cfg->alpn_protos) {
#ifdef CONFIG_MBEDTLS_SSL_ALPN
if ((ret = mbedtls_ssl_conf_alpn_protocols(&tls->conf, cfg->alpn_protos) != 0)) {
if ((ret = mbedtls_ssl_conf_alpn_protocols(&tls->conf, cfg->alpn_protos)) != 0) {
ESP_LOGE(TAG, "mbedtls_ssl_conf_alpn_protocols returned -0x%x", -ret);
ESP_INT_EVENT_TRACKER_CAPTURE(tls->error_handle, ESP_TLS_ERR_TYPE_MBEDTLS, -ret);
return ESP_ERR_MBEDTLS_SSL_CONF_ALPN_PROTOCOLS_FAILED;
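The fix above is an operator-precedence issue: in the old line the != comparison binds before the assignment, so ret receives the boolean result of the comparison rather than the mbedtls error code that the log message and error tracker expect. A stripped-down illustration (not the esp-tls code itself):
// Illustration of the precedence bug fixed above (plain C, no mbedtls needed).
static int fake_mbedtls_call(void) { return -0x7100; }   // stand-in for a failing mbedtls function
static void alpn_precedence_demo(void)
{
    int ret;
    if ((ret = fake_mbedtls_call() != 0)) { }   // buggy: ret == 1, the comparison result
    if ((ret = fake_mbedtls_call()) != 0) { }   // fixed: ret == -0x7100, the real error code
    (void)ret;
}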
@ -625,6 +625,10 @@ esp_err_t esp_mbedtls_init_global_ca_store(void)
esp_err_t esp_mbedtls_set_global_ca_store(const unsigned char *cacert_pem_buf, const unsigned int cacert_pem_bytes)
{
#ifdef CONFIG_MBEDTLS_DYNAMIC_FREE_CA_CERT
ESP_LOGE(TAG, "Please disable dynamic freeing of ca cert in mbedtls (CONFIG_MBEDTLS_DYNAMIC_FREE_CA_CERT)\n in order to use the global ca_store");
return ESP_FAIL;
#endif
if (cacert_pem_buf == NULL) {
ESP_LOGE(TAG, "cacert_pem_buf is null");
return ESP_ERR_INVALID_ARG;


@ -16,6 +16,16 @@
#ifndef __ESP_BROWNOUT_H
#define __ESP_BROWNOUT_H
#ifdef __cplusplus
extern "C" {
#endif
void esp_brownout_init(void);
void esp_brownout_disable(void);
#ifdef __cplusplus
}
#endif
#endif


@ -17,8 +17,8 @@
#include "esp_crypto_lock.h"
/* Lock overview:
SHA: independent
AES: independent
SHA: peripheral independent, but DMA is shared with AES
AES: peripheral independent, but DMA is shared with SHA
MPI/RSA: independent
HMAC: needs SHA
DS: needs HMAC (which needs SHA), AES and MPI
@ -30,24 +30,21 @@ static _lock_t s_crypto_ds_lock;
/* Lock for HMAC peripheral */
static _lock_t s_crypto_hmac_lock;
/* Lock for the SHA peripheral, also used by the HMAC and DS peripheral */
static _lock_t s_crypto_sha_lock;
/* Lock for the AES peripheral, also used by DS peripheral */
static _lock_t s_crypto_aes_lock;
/* Lock for the MPI/RSA peripheral, also used by the DS peripheral */
static _lock_t s_crypto_mpi_lock;
/* Single lock for SHA and AES, sharing a reserved GDMA channel */
static _lock_t s_crypto_sha_aes_lock;
void esp_crypto_hmac_lock_acquire(void)
{
_lock_acquire(&s_crypto_hmac_lock);
esp_crypto_sha_lock_acquire();
esp_crypto_sha_aes_lock_acquire();
}
void esp_crypto_hmac_lock_release(void)
{
esp_crypto_sha_lock_release();
esp_crypto_sha_aes_lock_release();
_lock_release(&s_crypto_hmac_lock);
}
@ -55,36 +52,24 @@ void esp_crypto_ds_lock_acquire(void)
{
_lock_acquire(&s_crypto_ds_lock);
esp_crypto_hmac_lock_acquire();
esp_crypto_aes_lock_acquire();
esp_crypto_mpi_lock_acquire();
}
void esp_crypto_ds_lock_release(void)
{
esp_crypto_mpi_lock_release();
esp_crypto_aes_lock_release();
esp_crypto_hmac_lock_release();
_lock_release(&s_crypto_ds_lock);
}
void esp_crypto_sha_lock_acquire(void)
void esp_crypto_sha_aes_lock_acquire(void)
{
_lock_acquire(&s_crypto_sha_lock);
_lock_acquire(&s_crypto_sha_aes_lock);
}
void esp_crypto_sha_lock_release(void)
void esp_crypto_sha_aes_lock_release(void)
{
_lock_release(&s_crypto_sha_lock);
}
void esp_crypto_aes_lock_acquire(void)
{
_lock_acquire(&s_crypto_aes_lock);
}
void esp_crypto_aes_lock_release(void)
{
_lock_release(&s_crypto_aes_lock);
_lock_release(&s_crypto_sha_aes_lock);
}
void esp_crypto_mpi_lock_acquire(void)
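With SHA and AES now sharing a single lazily allocated GDMA channel, code that previously took the separate SHA or AES locks takes the one combined lock instead. A minimal usage sketch (the work inside the locked region is a placeholder):
// Sketch: guard any SHA or AES hardware use with the shared SHA/AES lock.
#include "esp_crypto_lock.h"
static void hw_hash_or_cipher_locked(void)
{
    esp_crypto_sha_aes_lock_acquire();
    // ... drive the SHA or AES peripheral; both now use the shared, reserved GDMA channel ...
    esp_crypto_sha_aes_lock_release();
}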


@ -1,4 +1,4 @@
// Copyright 2015-2020 Espressif Systems (Shanghai) PTE LTD
// Copyright 2015-2021 Espressif Systems (Shanghai) PTE LTD
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@ -16,6 +16,16 @@
#ifndef __ESP_BROWNOUT_H
#define __ESP_BROWNOUT_H
#ifdef __cplusplus
extern "C" {
#endif
void esp_brownout_init(void);
void esp_brownout_disable(void);
#ifdef __cplusplus
}
#endif
#endif


@ -47,28 +47,17 @@ void esp_crypto_ds_lock_acquire(void);
void esp_crypto_ds_lock_release(void);
/**
* @brief Acquire lock for the SHA cryptography peripheral.
* @brief Acquire lock for the SHA and AES cryptography peripheral.
*
*/
void esp_crypto_sha_lock_acquire(void);
void esp_crypto_sha_aes_lock_acquire(void);
/**
* @brief Release lock for the SHA cryptography peripheral.
* @brief Release lock for the SHA and AES cryptography peripheral.
*
*/
void esp_crypto_sha_lock_release(void);
void esp_crypto_sha_aes_lock_release(void);
/**
* @brief Acquire lock for the aes cryptography peripheral.
*
*/
void esp_crypto_aes_lock_acquire(void);
/**
* @brief Release lock for the aes cryptography peripheral.
*
*/
void esp_crypto_aes_lock_release(void);
/**
* @brief Acquire lock for the mpi cryptography peripheral.


@ -1,4 +1,4 @@
// Copyright 2015-2016 Espressif Systems (Shanghai) PTE LTD
// Copyright 2015-2021 Espressif Systems (Shanghai) PTE LTD
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@ -22,6 +22,8 @@ extern "C" {
void esp_brownout_init(void);
void esp_brownout_disable(void);
#ifdef __cplusplus
}
#endif


@ -16,33 +16,28 @@
#include "esp_crypto_lock.h"
/* Lock for the SHA peripheral, also used by the HMAC and DS peripheral */
static _lock_t s_crypto_sha_lock;
/* Lock overview:
SHA: peripheral independent, but DMA is shared with AES
AES: peripheral independent, but DMA is shared with SHA
MPI/RSA: independent
HMAC: needs SHA
DS: needs HMAC (which needs SHA), AES and MPI
*/
/* Lock for the AES peripheral, also used by DS peripheral */
static _lock_t s_crypto_aes_lock;
/* Single lock for SHA and AES, sharing a reserved GDMA channel */
static _lock_t s_crypto_sha_aes_lock;
/* Lock for the MPI/RSA peripheral, also used by the DS peripheral */
static _lock_t s_crypto_mpi_lock;
void esp_crypto_sha_lock_acquire(void)
void esp_crypto_sha_aes_lock_acquire(void)
{
_lock_acquire(&s_crypto_sha_lock);
_lock_acquire(&s_crypto_sha_aes_lock);
}
void esp_crypto_sha_lock_release(void)
void esp_crypto_sha_aes_lock_release(void)
{
_lock_release(&s_crypto_sha_lock);
}
void esp_crypto_aes_lock_acquire(void)
{
_lock_acquire(&s_crypto_aes_lock);
}
void esp_crypto_aes_lock_release(void)
{
_lock_release(&s_crypto_aes_lock);
_lock_release(&s_crypto_sha_aes_lock);
}
void esp_crypto_mpi_lock_acquire(void)


@ -16,6 +16,16 @@
#ifndef __ESP_BROWNOUT_H
#define __ESP_BROWNOUT_H
#ifdef __cplusplus
extern "C" {
#endif
void esp_brownout_init(void);
void esp_brownout_disable(void);
#ifdef __cplusplus
}
#endif
#endif


@ -27,24 +27,16 @@ extern "C" {
*/
/**
* Acquire lock for the SHA cryptography peripheral
* @brief Acquire lock for the SHA and AES cryptography peripheral.
*
*/
void esp_crypto_sha_lock_acquire(void);
void esp_crypto_sha_aes_lock_acquire(void);
/**
* Release lock for the SHA cryptography peripheral
* @brief Release lock for the SHA and AES cryptography peripheral.
*
*/
void esp_crypto_sha_lock_release(void);
/**
* Acquire lock for the AES cryptography peripheral
*/
void esp_crypto_aes_lock_acquire(void);
/**
* Release lock for the AES cryptography peripheral
*/
void esp_crypto_aes_lock_release(void);
void esp_crypto_sha_aes_lock_release(void);
/**
* Acquire lock for the MPI/RSA cryptography peripheral


@ -29,7 +29,7 @@ PROVIDE ( I2C1 = 0x60027000 );
PROVIDE ( TWAI = 0x6002B000 );
PROVIDE ( GPSPI4 = 0x60037000 );
PROVIDE ( GDMA = 0x6003F000 );
PROVIDE ( UART2 = 0x60010000 );
PROVIDE ( UART2 = 0x6001e000 );
PROVIDE ( DMA = 0x6003F000 );
PROVIDE ( APB_SARADC = 0x60040000 );
PROVIDE ( LCD_CAM = 0x60041000 );


@ -85,3 +85,18 @@ void esp_brownout_init(void)
brownout_hal_intr_enable(true);
#endif // not SOC_BROWNOUT_RESET_SUPPORTED
}
void esp_brownout_disable(void)
{
brownout_hal_config_t cfg = {
.enabled = false,
};
brownout_hal_config(&cfg);
#ifndef SOC_BROWNOUT_RESET_SUPPORTED
brownout_hal_intr_enable(false);
rtc_isr_deregister(rtc_brownout_isr_handler, NULL);
#endif // not SOC_BROWNOUT_RESET_SUPPORTED
}


@ -90,4 +90,8 @@ menu "Power Management"
Warning: If you want to enable this option on ESP32, you should enable `GPIO_ESP32_SUPPORT_SWITCH_SLP_PULL`
first, otherwise you will not be able to switch pullup/pulldown mode.
config PM_SLP_DEFAULT_PARAMS_OPT
bool
default n
endmenu # "Power Management"


@ -49,6 +49,13 @@ typedef enum {
*/
pm_mode_t esp_pm_impl_get_mode(esp_pm_lock_type_t type, int arg);
/**
* @brief Get CPU clock frequency by power mode
* @param mode power mode
* @return CPU clock frequency
*/
int esp_pm_impl_get_cpu_freq(pm_mode_t mode);
/**
* If profiling is enabled, this data type will be used to store microsecond
* timestamps.
@ -172,6 +179,28 @@ esp_err_t esp_pm_register_inform_out_light_sleep_overhead_callback(inform_out_li
*/
esp_err_t esp_pm_unregister_inform_out_light_sleep_overhead_callback(inform_out_light_sleep_overhead_cb_t cb);
/**
* @brief Callback function type used to inform peripherals of the default light sleep parameters
*/
typedef void (* update_light_sleep_default_params_config_cb_t)(int, int);
/**
* @brief Register a callback that configures the peripherals' default light sleep parameters
*
* This function allows you to register a callback that configures the default
* light sleep parameters of the peripherals.
* @param cb function that updates the default parameters
*/
void esp_pm_register_light_sleep_default_params_config_callback(update_light_sleep_default_params_config_cb_t cb);
/**
* @brief Unregister the callback that configures the peripherals' default light sleep parameters
*
* This function allows you to unregister a callback that configures the default
* light sleep parameters of the peripherals.
*/
void esp_pm_unregister_light_sleep_default_params_config_callback(void);
#ifdef CONFIG_PM_PROFILING
#define WITH_PROFILING
#endif


@ -152,7 +152,7 @@ static esp_pm_lock_handle_t s_rtos_lock_handle[portNUM_PROCESSORS];
/* Lookup table of CPU frequency configs to be used in each mode.
* Initialized by esp_pm_impl_init and modified by esp_pm_configure.
*/
rtc_cpu_freq_config_t s_cpu_freq_by_mode[PM_MODE_COUNT];
static rtc_cpu_freq_config_t s_cpu_freq_by_mode[PM_MODE_COUNT];
/* Whether automatic light sleep is enabled */
static bool s_light_sleep_en = false;
@ -198,6 +198,9 @@ static const char* TAG = "pm";
static void do_switch(pm_mode_t new_mode);
static void leave_idle(void);
static void on_freq_update(uint32_t old_ticks_per_us, uint32_t ticks_per_us);
#if CONFIG_PM_SLP_DEFAULT_PARAMS_OPT
static void esp_pm_light_sleep_default_params_config(int min_freq_mhz, int max_freq_mhz);
#endif
pm_mode_t esp_pm_impl_get_mode(esp_pm_lock_type_t type, int arg)
{
@ -311,6 +314,12 @@ esp_err_t esp_pm_configure(const void* vconfig)
}
#endif
#if CONFIG_PM_SLP_DEFAULT_PARAMS_OPT
if (config->light_sleep_enable) {
esp_pm_light_sleep_default_params_config(min_freq_mhz, max_freq_mhz);
}
#endif
return ESP_OK;
}
@ -660,6 +669,19 @@ void esp_pm_impl_dump_stats(FILE* out)
}
#endif // WITH_PROFILING
int esp_pm_impl_get_cpu_freq(pm_mode_t mode)
{
int freq_mhz;
if (mode >= PM_MODE_LIGHT_SLEEP && mode < PM_MODE_COUNT) {
portENTER_CRITICAL(&s_switch_lock);
freq_mhz = s_cpu_freq_by_mode[mode].freq_mhz;
portEXIT_CRITICAL(&s_switch_lock);
} else {
abort();
}
return freq_mhz;
}
void esp_pm_impl_init(void)
{
#if defined(CONFIG_ESP_CONSOLE_UART)
@ -816,3 +838,28 @@ void periph_inform_out_light_sleep_overhead(uint32_t out_light_sleep_time)
}
}
}
static update_light_sleep_default_params_config_cb_t s_light_sleep_default_params_config_cb = NULL;
void esp_pm_register_light_sleep_default_params_config_callback(update_light_sleep_default_params_config_cb_t cb)
{
if (s_light_sleep_default_params_config_cb == NULL) {
s_light_sleep_default_params_config_cb = cb;
}
}
void esp_pm_unregister_light_sleep_default_params_config_callback(void)
{
if (s_light_sleep_default_params_config_cb) {
s_light_sleep_default_params_config_cb = NULL;
}
}
#if CONFIG_PM_SLP_DEFAULT_PARAMS_OPT
static void esp_pm_light_sleep_default_params_config(int min_freq_mhz, int max_freq_mhz)
{
if (s_light_sleep_default_params_config_cb) {
(*s_light_sleep_default_params_config_cb)(min_freq_mhz, max_freq_mhz);
}
}
#endif
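In practice a peripheral driver (the Wi-Fi sleep code added elsewhere in this set of changes is the intended consumer) registers its callback once, and esp_pm_configure() then forwards the configured min/max CPU frequencies when light sleep is enabled and CONFIG_PM_SLP_DEFAULT_PARAMS_OPT is set. A hedged sketch of the registration side (the header path and the callback body are assumptions):
// Sketch: a peripheral registers its callback once; esp_pm_configure() later calls it with
// the configured min/max CPU frequencies when light sleep is enabled.
#include "esp_private/pm_impl.h"   // assumed location of the private PM header
static void my_slp_params_cb(int min_freq_mhz, int max_freq_mhz)
{
    // placeholder: derive sleep timing defaults from the configured CPU frequencies
    (void)min_freq_mhz;
    (void)max_freq_mhz;
}
static void register_slp_params_cb(void)
{
    esp_pm_register_light_sleep_default_params_config_callback(my_slp_params_cb);
}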


@ -1500,8 +1500,6 @@ esp_pp_rom_version_get = 0x400015b0;
RC_GetBlockAckTime = 0x400015b4;
ebuf_list_remove = 0x400015b8;
esf_buf_alloc = 0x400015bc;
esf_buf_alloc_dynamic = 0x400015c0;
esf_buf_recycle = 0x400015c4;
GetAccess = 0x400015c8;
hal_mac_is_low_rate_enabled = 0x400015cc;
hal_mac_tx_get_blockack = 0x400015d0;
@ -1569,7 +1567,6 @@ ppEnqueueRxq = 0x400016c8;
ppEnqueueTxDone = 0x400016cc;
ppGetTxQFirstAvail_Locked = 0x400016d0;
ppGetTxframe = 0x400016d4;
ppMapTxQueue = 0x400016d8;
ppProcTxSecFrame = 0x400016dc;
ppProcessRxPktHdr = 0x400016e0;
ppProcessTxQ = 0x400016e4;
@ -1604,7 +1601,6 @@ rcampduuprate = 0x40001754;
rcClearCurAMPDUSched = 0x40001758;
rcClearCurSched = 0x4000175c;
rcClearCurStat = 0x40001760;
rcGetSched = 0x40001764;
rcLowerSched = 0x40001768;
rcSetTxAmpduLimit = 0x4000176c;
rcTxUpdatePer = 0x40001770;
@ -1624,7 +1620,6 @@ TRC_PER_IS_GOOD = 0x400017a4;
trc_SetTxAmpduState = 0x400017a8;
trc_tid_isTxAmpduOperational = 0x400017ac;
trcAmpduSetState = 0x400017b0;
wDevCheckBlockError = 0x400017b4;
wDev_AppendRxBlocks = 0x400017b8;
wDev_DiscardFrame = 0x400017bc;
wDev_GetNoiseFloor = 0x400017c0;


@ -135,6 +135,7 @@ static int async_memcpy_prepare_receive(async_memcpy_t asmcp, void *buffer, size
while (size > DMA_DESCRIPTOR_BUFFER_MAX_SIZE) {
if (desc->dw0.owner != DMA_DESCRIPTOR_BUFFER_OWNER_DMA) {
desc->dw0.suc_eof = 0;
desc->dw0.size = DMA_DESCRIPTOR_BUFFER_MAX_SIZE;
desc->buffer = &buf[prepared_length];
desc = desc->next; // move to next descriptor
@ -148,6 +149,7 @@ static int async_memcpy_prepare_receive(async_memcpy_t asmcp, void *buffer, size
if (size) {
if (desc->dw0.owner != DMA_DESCRIPTOR_BUFFER_OWNER_DMA) {
end = desc; // the last descriptor used
desc->dw0.suc_eof = 0;
desc->dw0.size = size;
desc->buffer = &buf[prepared_length];
desc = desc->next; // move to next descriptor


@ -34,6 +34,13 @@ typedef enum {
ESP_EXT1_WAKEUP_ANY_HIGH = 1 //!< Wake the chip when any of the selected GPIOs go high
} esp_sleep_ext1_wakeup_mode_t;
#if SOC_GPIO_SUPPORT_DEEPSLEEP_WAKEUP
typedef enum {
ESP_GPIO_WAKEUP_GPIO_LOW = 0,
ESP_GPIO_WAKEUP_GPIO_HIGH = 1
} esp_deepsleep_gpio_wake_up_mode_t;
#endif
/**
* @brief Power domains which can be powered down in sleep mode
*/
@ -151,8 +158,6 @@ touch_pad_t esp_sleep_get_touchpad_wakeup_status(void);
#endif // SOC_TOUCH_SENSOR_NUM > 0
#if SOC_PM_SUPPORT_EXT_WAKEUP
/**
* @brief Returns true if a GPIO number is valid for use as wakeup source.
*
@ -164,6 +169,8 @@ touch_pad_t esp_sleep_get_touchpad_wakeup_status(void);
*/
bool esp_sleep_is_valid_wakeup_gpio(gpio_num_t gpio_num);
#if SOC_PM_SUPPORT_EXT_WAKEUP
/**
* @brief Enable wakeup using a pin
*
@ -224,6 +231,32 @@ esp_err_t esp_sleep_enable_ext1_wakeup(uint64_t mask, esp_sleep_ext1_wakeup_mode
#endif // SOC_PM_SUPPORT_EXT_WAKEUP
#if SOC_GPIO_SUPPORT_DEEPSLEEP_WAKEUP
/**
* @brief Enable wakeup using specific gpio pins
*
* This function enables an IO pin to wake the chip from deep sleep
*
* @note This function does not modify pin configuration. The pins are
* configured in esp_sleep_start, immediately before
* entering sleep mode.
*
* @note You don't need to worry about pull-up or pull-down before using this
*       function, because this is done in esp_sleep_start based on the mask
*       parameter you give. Also, when you use a low level to wake up the
*       chip, we strongly recommend adding an external pull-up resistor.
*
* @param gpio_pin_mask Bit mask of GPIO numbers which will cause wakeup. Only GPIOs
*                      which have RTC functionality can be used in this bit map.
* @param mode Select logic function used to determine wakeup condition:
*             - ESP_GPIO_WAKEUP_GPIO_LOW: wake up when the GPIO turns low.
*             - ESP_GPIO_WAKEUP_GPIO_HIGH: wake up when the GPIO turns high.
* @return
*      - ESP_OK on success
*      - ESP_ERR_INVALID_ARG if the GPIO number is greater than 5 or the mode is invalid
*/
esp_err_t esp_deep_sleep_enable_gpio_wakeup(uint64_t gpio_pin_mask, esp_deepsleep_gpio_wake_up_mode_t mode);
#endif
/**
* @brief Enable wakeup from light sleep using GPIOs
*
@ -279,6 +312,17 @@ esp_err_t esp_sleep_enable_wifi_wakeup(void);
*/
uint64_t esp_sleep_get_ext1_wakeup_status(void);
#if SOC_GPIO_SUPPORT_DEEPSLEEP_WAKEUP
/**
* @brief Get the bit mask of GPIOs which caused wakeup (gpio)
*
* If wakeup was caused by another source, this function will return 0.
*
* @return bit mask, if GPIOn caused wakeup, BIT(n) will be set
*/
uint64_t esp_sleep_get_gpio_wakeup_status(void);
#endif //SOC_GPIO_SUPPORT_DEEPSLEEP_WAKEUP
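Together with the status getter above, the new deep-sleep GPIO wakeup API is used roughly as follows; the GPIO numbers are placeholders and, as the implementation later in this set of changes enforces, only the low-numbered deep-sleep-capable pins are valid:
// Sketch: arm GPIO2/GPIO4 (placeholders) as high-level deep sleep wakeup sources,
// then check which pin woke the chip after the subsequent reset.
#include "esp_err.h"
#include "esp_sleep.h"
static void enter_gpio_wake_deep_sleep(void)
{
    ESP_ERROR_CHECK(esp_deep_sleep_enable_gpio_wakeup((1ULL << 2) | (1ULL << 4), ESP_GPIO_WAKEUP_GPIO_HIGH));
    esp_deep_sleep_start();
}
static void after_wakeup(void)
{
    if (esp_sleep_get_wakeup_cause() == ESP_SLEEP_WAKEUP_GPIO) {
        uint64_t pins = esp_sleep_get_gpio_wakeup_status();   // BIT(n) set for the waking GPIO
        (void)pins;
    }
}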
/**
* @brief Set power down mode for an RTC power domain in sleep mode
*


@ -71,6 +71,7 @@
#include "esp_flash_encrypt.h"
#include "hal/rtc_io_hal.h"
#include "hal/gpio_hal.h"
#include "hal/wdt_hal.h"
#include "soc/rtc.h"
#include "soc/efuse_reg.h"
@ -460,7 +461,11 @@ void IRAM_ATTR call_start_cpu0(void)
esp_rom_uart_set_clock_baudrate(CONFIG_ESP_CONSOLE_UART_NUM, clock_hz, CONFIG_ESP_CONSOLE_UART_BAUDRATE);
#endif
#if SOC_RTCIO_HOLD_SUPPORTED
rtcio_hal_unhold_all();
#else
gpio_hal_force_unhold_all();
#endif
esp_cache_err_int_init();


@ -58,6 +58,7 @@
#include "esp32s2/clk.h"
#include "esp32s2/rom/cache.h"
#include "esp32s2/rom/rtc.h"
#include "esp32s2/brownout.h"
#include "soc/extmem_reg.h"
#include "driver/gpio.h"
#elif CONFIG_IDF_TARGET_ESP32S3
@ -71,8 +72,6 @@
#include "esp32c3/rom/rtc.h"
#include "soc/extmem_reg.h"
#include "esp_heap_caps.h"
#include "hal/rtc_hal.h"
#include "soc/rtc_caps.h"
#endif
// If light sleep time is less than that, don't power down flash
@ -139,6 +138,8 @@ typedef struct {
uint32_t ext1_rtc_gpio_mask : 18;
uint32_t ext0_trigger_level : 1;
uint32_t ext0_rtc_gpio_num : 5;
uint32_t gpio_wakeup_mask : 6;
uint32_t gpio_trigger_mode : 6;
uint32_t sleep_time_adjustment;
uint32_t ccount_ticks_record;
uint32_t sleep_time_overhead_out;
@ -175,6 +176,9 @@ static void timer_wakeup_prepare(void);
#if CONFIG_IDF_TARGET_ESP32S2 || CONFIG_IDF_TARGET_ESP32S3
static void touch_wakeup_prepare(void);
#endif
#if SOC_GPIO_SUPPORT_DEEPSLEEP_WAKEUP
static void esp_deep_sleep_wakeup_prepare(void);
#endif
#if CONFIG_MAC_BB_PD
#define MAC_BB_POWER_DOWN_CB_NO 2
@ -454,15 +458,15 @@ static uint32_t IRAM_ATTR esp_sleep_start(uint32_t pd_flags)
suspend_uarts();
}
#if CONFIG_MAC_BB_PD
mac_bb_power_down_cb_execute();
#endif
// Save current frequency and switch to XTAL
rtc_cpu_freq_config_t cpu_freq_config;
rtc_clk_cpu_freq_get_config(&cpu_freq_config);
rtc_clk_cpu_freq_set_xtal();
#if CONFIG_MAC_BB_PD
mac_bb_power_down_cb_execute();
#endif
#if SOC_PM_SUPPORT_EXT_WAKEUP
// Configure pins for external wakeup
if (s_config.wakeup_triggers & RTC_EXT0_TRIG_EN) {
@ -473,6 +477,12 @@ static uint32_t IRAM_ATTR esp_sleep_start(uint32_t pd_flags)
}
#endif
#if SOC_GPIO_SUPPORT_DEEPSLEEP_WAKEUP
if (s_config.wakeup_triggers & RTC_GPIO_TRIG_EN) {
esp_deep_sleep_wakeup_prepare();
}
#endif
#ifdef CONFIG_IDF_TARGET_ESP32
// Enable ULP wakeup
if (s_config.wakeup_triggers & RTC_ULP_TRIG_EN) {
@ -508,7 +518,7 @@ static uint32_t IRAM_ATTR esp_sleep_start(uint32_t pd_flags)
*/
#if CONFIG_IDF_TARGET_ESP32
reject_triggers = RTC_CNTL_LIGHT_SLP_REJECT_EN_M | RTC_CNTL_GPIO_REJECT_EN_M;
#elif CONFIG_IDF_TARGET_ESP32S2
#else
reject_triggers = s_config.wakeup_triggers;
#endif
}
@ -588,6 +598,12 @@ inline static uint32_t IRAM_ATTR call_rtc_sleep_start(uint32_t reject_triggers)
void IRAM_ATTR esp_deep_sleep_start(void)
{
#if CONFIG_IDF_TARGET_ESP32S2
/* Due to hardware limitations, on the S2 the brownout detector sometimes triggers during deep sleep;
to circumvent this we disable the brownout detector before sleeping */
esp_brownout_disable();
#endif //CONFIG_IDF_TARGET_ESP32S2
// record current RTC time
s_config.rtc_ticks_at_sleep_start = rtc_time_get();
@ -606,8 +622,18 @@ void IRAM_ATTR esp_deep_sleep_start(void)
// Correct the sleep time
s_config.sleep_time_adjustment = DEEP_SLEEP_TIME_OVERHEAD_US;
uint32_t force_pd_flags = RTC_SLEEP_PD_DIG | RTC_SLEEP_PD_VDDSDIO;
#if SOC_PM_SUPPORT_WIFI_PD
force_pd_flags |= RTC_SLEEP_PD_WIFI;
#endif
#if SOC_PM_SUPPORT_BT_PD
force_pd_flags |= RTC_SLEEP_PD_BT;
#endif
// Enter sleep
esp_sleep_start(RTC_SLEEP_PD_DIG | RTC_SLEEP_PD_VDDSDIO | pd_flags);
esp_sleep_start(force_pd_flags | pd_flags);
// Because RTC is in a slower clock domain than the CPU, it
// can take several CPU cycles for the sleep mode to start.
@ -915,17 +941,17 @@ touch_pad_t esp_sleep_get_touchpad_wakeup_status(void)
#endif // SOC_TOUCH_SENSOR_NUM > 0
#if SOC_PM_SUPPORT_EXT_WAKEUP
bool esp_sleep_is_valid_wakeup_gpio(gpio_num_t gpio_num)
{
#if SOC_RTCIO_INPUT_OUTPUT_SUPPORTED
return RTC_GPIO_IS_VALID_GPIO(gpio_num);
#else
return GPIO_IS_VALID_GPIO(gpio_num);
return GPIO_IS_DEEP_SLEEP_WAKEUP_VALID_GPIO(gpio_num);
#endif // SOC_RTCIO_INPUT_OUTPUT_SUPPORTED
}
#if SOC_PM_SUPPORT_EXT_WAKEUP
esp_err_t esp_sleep_enable_ext0_wakeup(gpio_num_t gpio_num, int level)
{
if (level < 0 || level > 1) {
@ -1035,6 +1061,66 @@ uint64_t esp_sleep_get_ext1_wakeup_status(void)
#endif // SOC_PM_SUPPORT_EXT_WAKEUP
#if SOC_GPIO_SUPPORT_DEEPSLEEP_WAKEUP
uint64_t esp_sleep_get_gpio_wakeup_status(void)
{
if (esp_sleep_get_wakeup_cause() != ESP_SLEEP_WAKEUP_GPIO) {
return 0;
}
return rtc_hal_gpio_get_wakeup_pins();
}
static void esp_deep_sleep_wakeup_prepare(void)
{
for (gpio_num_t gpio_idx = GPIO_NUM_0; gpio_idx < GPIO_NUM_MAX; gpio_idx++) {
if (((1ULL << gpio_idx) & s_config.gpio_wakeup_mask) == 0) {
continue;
}
if (s_config.gpio_trigger_mode & BIT(gpio_idx)) {
ESP_ERROR_CHECK(gpio_pullup_dis(gpio_idx));
ESP_ERROR_CHECK(gpio_pulldown_en(gpio_idx));
} else {
ESP_ERROR_CHECK(gpio_pullup_en(gpio_idx));
ESP_ERROR_CHECK(gpio_pulldown_dis(gpio_idx));
}
rtc_hal_gpio_set_wakeup_pins();
ESP_ERROR_CHECK(gpio_hold_en(gpio_idx));
}
}
esp_err_t esp_deep_sleep_enable_gpio_wakeup(uint64_t gpio_pin_mask, esp_deepsleep_gpio_wake_up_mode_t mode)
{
if (mode > ESP_GPIO_WAKEUP_GPIO_HIGH) {
ESP_LOGE(TAG, "invalid mode");
return ESP_ERR_INVALID_ARG;
}
gpio_int_type_t intr_type = ((mode == ESP_GPIO_WAKEUP_GPIO_LOW) ? GPIO_INTR_LOW_LEVEL : GPIO_INTR_HIGH_LEVEL);
esp_err_t err = ESP_OK;
for (gpio_num_t gpio_idx = GPIO_NUM_0; gpio_idx < GPIO_NUM_MAX; gpio_idx++, gpio_pin_mask >>= 1) {
if ((gpio_pin_mask & 1) == 0) {
continue;
}
if (!esp_sleep_is_valid_wakeup_gpio(gpio_idx)) {
ESP_LOGE(TAG, "invalid mask, please ensure gpio number is no more than 5");
return ESP_ERR_INVALID_ARG;
}
err = gpio_deep_sleep_wakeup_enable(gpio_idx, intr_type);
s_config.gpio_wakeup_mask |= BIT(gpio_idx);
if (mode == ESP_GPIO_WAKEUP_GPIO_HIGH) {
s_config.gpio_trigger_mode |= (mode << gpio_idx);
} else {
s_config.gpio_trigger_mode &= ~(mode << gpio_idx);
}
}
s_config.wakeup_triggers |= RTC_GPIO_TRIG_EN;
rtc_hal_gpio_clear_wakeup_pins();
return err;
}
#endif //SOC_GPIO_SUPPORT_DEEPSLEEP_WAKEUP
esp_err_t esp_sleep_enable_gpio_wakeup(void)
{
#if CONFIG_IDF_TARGET_ESP32


@ -18,6 +18,7 @@
#include "test_utils.h"
#include "sdkconfig.h"
#include "esp_rom_sys.h"
#include "esp_timer.h"
#if !TEMPORARY_DISABLED_FOR_TARGETS(ESP32C3, ESP32S3)
@ -30,6 +31,9 @@
#elif CONFIG_IDF_TARGET_ESP32S3
#include "esp32s3/clk.h"
#include "esp32s3/rom/rtc.h"
#elif CONFIG_IDF_TARGET_ESP32C3
#include "esp32c3/clk.h"
#include "esp32c3/rom/rtc.h"
#endif
#define ESP_EXT0_WAKEUP_LEVEL_LOW 0
@ -527,3 +531,28 @@ static void check_time_deepsleep(void)
TEST_CASE_MULTIPLE_STAGES("check a time after wakeup from deep sleep", "[deepsleep][reset=DEEPSLEEP_RESET]", trigger_deepsleep, check_time_deepsleep);
#endif // #if !TEMPORARY_DISABLED_FOR_TARGETS(ESP32C3, ESP32S3)
#if SOC_GPIO_SUPPORT_DEEPSLEEP_WAKEUP
static void gpio_deepsleep_wakeup_config(void)
{
gpio_config_t io_conf = {
.mode = GPIO_MODE_INPUT,
.pin_bit_mask = ((1ULL << 2) | (1ULL << 4))
};
ESP_ERROR_CHECK(gpio_config(&io_conf));
}
TEST_CASE("wake up using GPIO (2 or 4 high)", "[deepsleep][ignore]")
{
gpio_deepsleep_wakeup_config();
ESP_ERROR_CHECK(esp_deep_sleep_enable_gpio_wakeup(((1ULL << 2) | (1ULL << 4)) , ESP_GPIO_WAKEUP_GPIO_HIGH));
esp_deep_sleep_start();
}
TEST_CASE("wake up using GPIO (2 or 4 low)", "[deepsleep][ignore]")
{
gpio_deepsleep_wakeup_config();
ESP_ERROR_CHECK(esp_deep_sleep_enable_gpio_wakeup(((1ULL << 2) | (1ULL << 4)) , ESP_GPIO_WAKEUP_GPIO_LOW));
esp_deep_sleep_start();
}
#endif // SOC_GPIO_SUPPORT_DEEPSLEEP_WAKEUP

View File

@ -92,7 +92,7 @@ menu "Wi-Fi"
config ESP32_WIFI_CACHE_TX_BUFFER_NUM
int "Max number of WiFi cache TX buffers"
depends on (ESP32_SPIRAM_SUPPORT || ESP32S2_SPIRAM_SUPPORT)
depends on (ESP32_SPIRAM_SUPPORT || ESP32S2_SPIRAM_SUPPORT || ESP32S3_SPIRAM_SUPPORT)
range 16 128
default 32
help
@ -323,6 +323,8 @@ menu "Wi-Fi"
config ESP_WIFI_SLP_IRAM_OPT
bool "WiFi SLP IRAM speed optimization"
depends on FREERTOS_USE_TICKLESS_IDLE
select PM_SLP_DEFAULT_PARAMS_OPT
help
Select this option to place the Wi-Fi library TBTT process and beacon receive functions in IRAM.
Some functions can be put in IRAM either by ESP32_WIFI_IRAM_OPT and ESP32_WIFI_RX_IRAM_OPT, or by this one.
@ -331,6 +333,22 @@ menu "Wi-Fi"
If neither of them is enabled, the other 7.4 KB of IRAM would be taken by this option.
Wi-Fi power-save mode average current would be reduced if this option is enabled.
config ESP_WIFI_SLP_DEFAULT_MIN_ACTIVE_TIME
int "Minimum active time"
range 8 60
default 8
depends on ESP_WIFI_SLP_IRAM_OPT
help
The minimum timeout for waiting to receive data, unit: milliseconds.
config ESP_WIFI_SLP_DEFAULT_MAX_ACTIVE_TIME
int "Maximum keep alive time"
range 10 60
default 60
depends on ESP_WIFI_SLP_IRAM_OPT
help
The maximum time that Wi-Fi keeps alive, unit: seconds.
config ESP_WIFI_FTM_INITIATOR_SUPPORT
bool "FTM Initiator support"
default y
@ -368,6 +386,12 @@ menu "Wi-Fi"
default y
depends on (IDF_TARGET_ESP32S2 || IDF_TARGET_ESP32C3)
config ESP_WIFI_STA_DISCONNECTED_PM_ENABLE
bool "Power Management for station at disconnected"
help
Select this option to enable power management for the station when disconnected.
The chip will enter modem-sleep when the RF module is no longer in use.
endmenu # Wi-Fi
menu "PHY"

View File

@ -255,7 +255,7 @@ static void semphr_delete_wrapper(void *semphr)
static void wifi_thread_semphr_free(void* data)
{
xSemaphoreHandle *sem = (xSemaphoreHandle*)(data);
SemaphoreHandle_t *sem = (SemaphoreHandle_t*)(data);
if (sem) {
vSemaphoreDelete(sem);
@ -266,7 +266,7 @@ static void * wifi_thread_semphr_get_wrapper(void)
{
static bool s_wifi_thread_sem_key_init = false;
static pthread_key_t s_wifi_thread_sem_key;
xSemaphoreHandle sem = NULL;
SemaphoreHandle_t sem = NULL;
if (s_wifi_thread_sem_key_init == false) {
if (0 != pthread_key_create(&s_wifi_thread_sem_key, wifi_thread_semphr_free)) {

View File

@ -382,12 +382,12 @@ static void IRAM_ATTR timer_disarm_wrapper(void *timer)
ets_timer_disarm(timer);
}
static void IRAM_ATTR timer_done_wrapper(void *ptimer)
static void timer_done_wrapper(void *ptimer)
{
ets_timer_done(ptimer);
}
static void IRAM_ATTR timer_setfn_wrapper(void *ptimer, void *pfunction, void *parg)
static void timer_setfn_wrapper(void *ptimer, void *pfunction, void *parg)
{
ets_timer_setfn(ptimer, pfunction, parg);
}
@ -435,13 +435,12 @@ static int get_time_wrapper(void *t)
return os_get_time(t);
}
#define WIFI_LIGHT_SLEEP_CLK_WIDTH 12
static uint32_t esp_clk_slowclk_cal_get_wrapper(void)
{
/* The bit width of WiFi light sleep clock calibration is 12 while the one of
* system is 19. It should shift 19 - 12 = 7.
*/
return (esp_clk_slowclk_cal_get() >> (RTC_CLK_CAL_FRACT - WIFI_LIGHT_SLEEP_CLK_WIDTH));
return (esp_clk_slowclk_cal_get() >> (RTC_CLK_CAL_FRACT - SOC_WIFI_LIGHT_SLEEP_CLK_WIDTH));
}
static void * IRAM_ATTR malloc_internal_wrapper(size_t size)

View File

@ -40,6 +40,7 @@
#include "esp_phy_init.h"
#include "esp32s2/clk.h"
#include "soc/dport_reg.h"
#include "soc/rtc.h"
#include "soc/syscon_reg.h"
#include "phy_init_data.h"
#include "driver/periph_ctrl.h"
@ -242,7 +243,7 @@ static void semphr_delete_wrapper(void *semphr)
static void wifi_thread_semphr_free(void* data)
{
xSemaphoreHandle *sem = (xSemaphoreHandle*)(data);
SemaphoreHandle_t *sem = (SemaphoreHandle_t*)(data);
if (sem) {
vSemaphoreDelete(sem);
@ -253,7 +254,7 @@ static void * wifi_thread_semphr_get_wrapper(void)
{
static bool s_wifi_thread_sem_key_init = false;
static pthread_key_t s_wifi_thread_sem_key;
xSemaphoreHandle sem = NULL;
SemaphoreHandle_t sem = NULL;
if (s_wifi_thread_sem_key_init == false) {
if (0 != pthread_key_create(&s_wifi_thread_sem_key, wifi_thread_semphr_free)) {
@ -455,7 +456,7 @@ static uint32_t esp_clk_slowclk_cal_get_wrapper(void)
/* The bit width of WiFi light sleep clock calibration is 12 while the one of
* system is 19. It should shift 19 - 12 = 7.
*/
return (esp_clk_slowclk_cal_get() >> 7);
return (esp_clk_slowclk_cal_get() >> (RTC_CLK_CAL_FRACT - SOC_WIFI_LIGHT_SLEEP_CLK_WIDTH));
}
static void * IRAM_ATTR malloc_internal_wrapper(size_t size)
@ -536,7 +537,7 @@ static int coex_wifi_request_wrapper(uint32_t event, uint32_t latency, uint32_t
#endif
}
static int coex_wifi_release_wrapper(uint32_t event)
static IRAM_ATTR int coex_wifi_release_wrapper(uint32_t event)
{
#if CONFIG_SW_COEXIST_ENABLE
return coex_wifi_release(event);

View File

@ -57,19 +57,43 @@ extern void wifi_apb80m_request(void);
extern void wifi_apb80m_release(void);
#endif
/*
If CONFIG_SPIRAM_TRY_ALLOCATE_WIFI_LWIP is enabled, prefer to allocate the chunk of memory in SPIRAM first.
If that fails, fall back to internal memory.
*/
IRAM_ATTR void *wifi_malloc( size_t size )
{
#if CONFIG_SPIRAM_TRY_ALLOCATE_WIFI_LWIP
return heap_caps_malloc_prefer(size, 2, MALLOC_CAP_DEFAULT|MALLOC_CAP_SPIRAM, MALLOC_CAP_DEFAULT|MALLOC_CAP_INTERNAL);
#else
return malloc(size);
#endif
}
/*
If CONFIG_SPIRAM_TRY_ALLOCATE_WIFI_LWIP is enabled, prefer to allocate the chunk of memory in SPIRAM first.
If that fails, fall back to internal memory.
*/
IRAM_ATTR void *wifi_realloc( void *ptr, size_t size )
{
#if CONFIG_SPIRAM_TRY_ALLOCATE_WIFI_LWIP
return heap_caps_realloc_prefer(ptr, size, 2, MALLOC_CAP_DEFAULT|MALLOC_CAP_SPIRAM, MALLOC_CAP_DEFAULT|MALLOC_CAP_INTERNAL);
#else
return realloc(ptr, size);
#endif
}
/*
If CONFIG_SPIRAM_TRY_ALLOCATE_WIFI_LWIP is enabled, prefer to allocate the chunk of memory in SPIRAM first.
If that fails, fall back to internal memory.
*/
IRAM_ATTR void *wifi_calloc( size_t n, size_t size )
{
#if CONFIG_SPIRAM_TRY_ALLOCATE_WIFI_LWIP
return heap_caps_calloc_prefer(n, size, 2, MALLOC_CAP_DEFAULT|MALLOC_CAP_SPIRAM, MALLOC_CAP_DEFAULT|MALLOC_CAP_INTERNAL);
#else
return calloc(n, size);
#endif
}
static void * IRAM_ATTR wifi_zalloc_wrapper(size_t size)
@ -87,14 +111,48 @@ wifi_static_queue_t* wifi_create_queue( int queue_len, int item_size)
return NULL;
}
#if CONFIG_SPIRAM_USE_MALLOC
queue->storage = heap_caps_calloc(1, sizeof(StaticQueue_t) + (queue_len*item_size), MALLOC_CAP_INTERNAL|MALLOC_CAP_8BIT);
if (!queue->storage) {
goto _error;
}
queue->handle = xQueueCreateStatic( queue_len, item_size, ((uint8_t*)(queue->storage)) + sizeof(StaticQueue_t), (StaticQueue_t*)(queue->storage));
if (!queue->handle) {
goto _error;
}
return queue;
_error:
if (queue) {
if (queue->storage) {
free(queue->storage);
}
free(queue);
}
return NULL;
#else
queue->handle = xQueueCreate( queue_len, item_size);
return queue;
#endif
}
void wifi_delete_queue(wifi_static_queue_t *queue)
{
if (queue) {
vQueueDelete(queue->handle);
#if CONFIG_SPIRAM_USE_MALLOC
if (queue->storage) {
free(queue->storage);
}
#endif
free(queue);
}
}
@ -136,7 +194,7 @@ static void set_isr_wrapper(int32_t n, void *f, void *arg)
static void * spin_lock_create_wrapper(void)
{
portMUX_TYPE tmp = portMUX_INITIALIZER_UNLOCKED;
void *mux = malloc(sizeof(portMUX_TYPE));
void *mux = heap_caps_malloc(sizeof(portMUX_TYPE), MALLOC_CAP_8BIT|MALLOC_CAP_INTERNAL);
if (mux) {
memcpy(mux,&tmp,sizeof(portMUX_TYPE));
@ -369,12 +427,12 @@ static void IRAM_ATTR timer_disarm_wrapper(void *timer)
ets_timer_disarm(timer);
}
static void IRAM_ATTR timer_done_wrapper(void *ptimer)
static void timer_done_wrapper(void *ptimer)
{
ets_timer_done(ptimer);
}
static void IRAM_ATTR timer_setfn_wrapper(void *ptimer, void *pfunction, void *parg)
static void timer_setfn_wrapper(void *ptimer, void *pfunction, void *parg)
{
ets_timer_setfn(ptimer, pfunction, parg);
}
@ -407,14 +465,14 @@ static void IRAM_ATTR wifi_rtc_disable_iso_wrapper(void)
#endif
}
static void IRAM_ATTR wifi_clock_enable_wrapper(void)
static void wifi_clock_enable_wrapper(void)
{
periph_module_enable(PERIPH_WIFI_MODULE);
wifi_module_enable();
}
static void IRAM_ATTR wifi_clock_disable_wrapper(void)
static void wifi_clock_disable_wrapper(void)
{
periph_module_disable(PERIPH_WIFI_MODULE);
wifi_module_disable();
}
static int get_time_wrapper(void *t)
@ -422,13 +480,12 @@ static int get_time_wrapper(void *t)
return os_get_time(t);
}
#define WIFI_LIGHT_SLEEP_CLK_WIDTH 12
static uint32_t esp_clk_slowclk_cal_get_wrapper(void)
{
/* The bit width of WiFi light sleep clock calibration is 12 while the one of
* system is 19. It should shift 19 - 12 = 7.
*/
return (esp_clk_slowclk_cal_get() >> (RTC_CLK_CAL_FRACT - WIFI_LIGHT_SLEEP_CLK_WIDTH));
return (esp_clk_slowclk_cal_get() >> (RTC_CLK_CAL_FRACT - SOC_WIFI_LIGHT_SLEEP_CLK_WIDTH));
}
static void * IRAM_ATTR malloc_internal_wrapper(size_t size)
@ -452,29 +509,6 @@ static void * IRAM_ATTR zalloc_internal_wrapper(size_t size)
return ptr;
}
static esp_err_t nvs_open_wrapper(const char* name, uint32_t open_mode, nvs_handle_t *out_handle)
{
return nvs_open(name,(nvs_open_mode_t)open_mode, out_handle);
}
static void esp_log_writev_wrapper(uint32_t level, const char *tag, const char *format, va_list args)
{
return esp_log_writev((esp_log_level_t)level,tag,format,args);
}
static void esp_log_write_wrapper(uint32_t level,const char *tag,const char *format, ...)
{
va_list list;
va_start(list, format);
esp_log_writev((esp_log_level_t)level, tag, format, list);
va_end(list);
}
static esp_err_t esp_read_mac_wrapper(uint8_t* mac, uint32_t type)
{
return esp_read_mac(mac, (esp_mac_type_t)type);
}
static int coex_init_wrapper(void)
{
#if CONFIG_SW_COEXIST_ENABLE
@ -641,6 +675,11 @@ static void IRAM_ATTR esp_empty_wrapper(void)
}
int32_t IRAM_ATTR coex_is_in_isr_wrapper(void)
{
return !xPortCanYield();
}
wifi_osi_funcs_t g_wifi_osi_funcs = {
._version = ESP_WIFI_OS_ADAPTER_VERSION,
._env_is_chip = env_is_chip_wrapper,
@ -697,7 +736,7 @@ wifi_osi_funcs_t g_wifi_osi_funcs = {
._phy_disable = esp_phy_disable,
._phy_enable = esp_phy_enable,
._phy_update_country_info = esp_phy_update_country_info,
._read_mac = esp_read_mac_wrapper,
._read_mac = esp_read_mac,
._timer_arm = timer_arm_wrapper,
._timer_disarm = timer_disarm_wrapper,
._timer_done = timer_done_wrapper,
@ -715,7 +754,7 @@ wifi_osi_funcs_t g_wifi_osi_funcs = {
._nvs_get_u8 = nvs_get_u8,
._nvs_set_u16 = nvs_set_u16,
._nvs_get_u16 = nvs_get_u16,
._nvs_open = nvs_open_wrapper,
._nvs_open = nvs_open,
._nvs_close = nvs_close,
._nvs_commit = nvs_commit,
._nvs_set_blob = nvs_set_blob,
@ -725,8 +764,8 @@ wifi_osi_funcs_t g_wifi_osi_funcs = {
._get_time = get_time_wrapper,
._random = os_random,
._slowclk_cal_get = esp_clk_slowclk_cal_get_wrapper,
._log_write = esp_log_write_wrapper,
._log_writev = esp_log_writev_wrapper,
._log_write = esp_log_write,
._log_writev = esp_log_writev,
._log_timestamp = esp_log_timestamp,
._malloc_internal = malloc_internal_wrapper,
._realloc_internal = realloc_internal_wrapper,
@ -769,7 +808,7 @@ coex_adapter_funcs_t g_coex_adapter_funcs = {
._semphr_give_from_isr = semphr_give_from_isr_wrapper,
._semphr_take = semphr_take_wrapper,
._semphr_give = semphr_give_wrapper,
._is_in_isr = xPortInIsrContext,
._is_in_isr = coex_is_in_isr_wrapper,
._malloc_internal = malloc_internal_wrapper,
._free = free,
._esp_timer_get_time = esp_timer_get_time,

View File

@ -302,6 +302,23 @@ esp_err_t esp_now_get_peer_num(esp_now_peer_num_t *num);
*/
esp_err_t esp_now_set_pmk(const uint8_t *pmk);
/**
* @brief Set esp_now wake window for sta_disconnected power management
*
 * @param window how many microseconds the chip stays awake in each interval, ranging from 0 to 65535
 *
 * @attention 1. This configuration takes effect only when ESP_WIFI_STA_DISCONNECTED_PM_ENABLE is enabled
 * @attention 2. This configuration only works in station mode while disconnected
 * @attention 3. If more than one module has configured its wake_window, the chip stays awake for the largest one
 * @attention 4. If the gap between interval and window is smaller than 5 ms, the chip stays awake all the time
 * @attention 5. If wake_window has never been configured, the chip stays awake while disconnected once it uses ESP-NOW
*
* @return
* - ESP_OK : succeed
* - ESP_ERR_ESPNOW_NOT_INIT : ESPNOW is not initialized
*/
esp_err_t esp_now_set_wake_window(uint16_t window);
/**
* @}
*/
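A minimal sketch of the wake-window call, assuming ESP-NOW is already initialized and CONFIG_ESP_WIFI_STA_DISCONNECTED_PM_ENABLE is set; the 50 ms window is purely illustrative:
// Keep the chip receivable for roughly 50 ms of each connectionless wake interval.
esp_err_t err = esp_now_set_wake_window(50000);  // window in microseconds, per the parameter doc above
if (err == ESP_ERR_ESPNOW_NOT_INIT) {
    // esp_now_init() must be called before configuring the wake window
}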

View File

@ -575,10 +575,28 @@ esp_err_t esp_wifi_set_tx_done_cb(wifi_tx_done_cb_t cb);
esp_err_t esp_wifi_internal_set_spp_amsdu(wifi_interface_t ifidx, bool spp_cap, bool spp_req);
/**
* @brief Apply WiFi sleep optimization parameters
*
*/
void esp_wifi_internal_optimize_wake_ahead_time(void);
* @brief Update WIFI light sleep default parameters
*
* @param min_freq_mhz: minimum frequency of DFS
* @param max_freq_mhz: maximum frequency of DFS
*/
void esp_wifi_internal_update_light_sleep_default_params(int min_freq_mhz, int max_freq_mhz);
/**
 * @brief Set the delay time for Wi-Fi to enter the sleep state during light sleep
 *
 * @param return_to_sleep_delay: minimum timeout for waiting to receive data; when no
 *                data is received during this period, Wi-Fi enters the sleep process.
*/
void esp_wifi_set_sleep_delay_time(uint32_t return_to_sleep_delay);
/**
* @brief Set wifi keep alive time
*
* @param keep_alive_time: keep alive time
*/
void esp_wifi_set_keep_alive_time(uint32_t keep_alive_time);
/**
* @brief Set FTM Report log level

View File

@ -115,6 +115,7 @@ typedef struct {
int beacon_max_len; /**< WiFi softAP maximum length of the beacon */
int mgmt_sbuf_num; /**< WiFi management short buffer number, the minimum value is 6, the maximum value is 32 */
uint64_t feature_caps; /**< Enables additional WiFi features and capabilities */
bool sta_disconnected_pm; /**< WiFi Power Management for station at disconnected status */
int magic; /**< WiFi init magic number, it should be the last field */
} wifi_init_config_t;
@ -124,7 +125,7 @@ typedef struct {
#define WIFI_STATIC_TX_BUFFER_NUM 0
#endif
#if (CONFIG_ESP32_SPIRAM_SUPPORT | CONFIG_ESP32S2_SPIRAM_SUPPORT)
#if (CONFIG_ESP32_SPIRAM_SUPPORT || CONFIG_ESP32S2_SPIRAM_SUPPORT || CONFIG_ESP32S3_SPIRAM_SUPPORT)
#define WIFI_CACHE_TX_BUFFER_NUM CONFIG_ESP32_WIFI_CACHE_TX_BUFFER_NUM
#else
#define WIFI_CACHE_TX_BUFFER_NUM 0
@ -201,6 +202,12 @@ extern uint64_t g_wifi_feature_caps;
#define WIFI_MGMT_SBUF_NUM 32
#endif
#if CONFIG_ESP_WIFI_STA_DISCONNECTED_PM_ENABLE
#define WIFI_STA_DISCONNECTED_PM_ENABLED true
#else
#define WIFI_STA_DISCONNECTED_PM_ENABLED false
#endif
#define CONFIG_FEATURE_WPA3_SAE_BIT (1<<0)
#define CONFIG_FEATURE_CACHE_TX_BUF_BIT (1<<1)
#define CONFIG_FEATURE_FTM_INITIATOR_BIT (1<<2)
@ -227,6 +234,7 @@ extern uint64_t g_wifi_feature_caps;
.beacon_max_len = WIFI_SOFTAP_BEACON_MAX_LEN, \
.mgmt_sbuf_num = WIFI_MGMT_SBUF_NUM, \
.feature_caps = g_wifi_feature_caps, \
.sta_disconnected_pm = WIFI_STA_DISCONNECTED_PM_ENABLED, \
.magic = WIFI_INIT_CONFIG_MAGIC\
};
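As a sketch of how the new field reaches an application, the default init config already carries it, so enabling CONFIG_ESP_WIFI_STA_DISCONNECTED_PM_ENABLE in menuconfig is enough for the usual init sequence:
wifi_init_config_t cfg = WIFI_INIT_CONFIG_DEFAULT();
// cfg.sta_disconnected_pm expands to WIFI_STA_DISCONNECTED_PM_ENABLED, i.e. true when the
// Kconfig option is set; it can also be overridden here before calling esp_wifi_init().
ESP_ERROR_CHECK(esp_wifi_init(&cfg));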
@ -1169,6 +1177,47 @@ esp_err_t esp_wifi_set_rssi_threshold(int32_t rssi);
*/
esp_err_t esp_wifi_ftm_initiate_session(wifi_ftm_initiator_cfg_t *cfg);
/**
* @brief Enable or disable 11b rate of specified interface
*
* @attention 1. This API should be called after esp_wifi_init() and before esp_wifi_start().
 * @attention 2. Call this API only when you really need to disable the 11b rate; otherwise do not call it.
*
* @param ifx Interface to be configured.
* @param disable true means disable 11b rate while false means enable 11b rate.
*
* @return
* - ESP_OK: succeed
* - others: failed
*/
esp_err_t esp_wifi_config_11b_rate(wifi_interface_t ifx, bool disable);
/**
* @brief Config ESPNOW rate of specified interface
*
* @attention 1. This API should be called after esp_wifi_init() and before esp_wifi_start().
*
* @param ifx Interface to be configured.
 * @param rate ESPNOW rate; only 1M, 6M and MCS0_LGI are supported
*
* @return
* - ESP_OK: succeed
* - others: failed
*/
esp_err_t esp_wifi_config_espnow_rate(wifi_interface_t ifx, wifi_phy_rate_t rate);
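A minimal sketch of the ordering constraint from the attentions above; the interface and rate choices are illustrative:
wifi_init_config_t cfg = WIFI_INIT_CONFIG_DEFAULT();
ESP_ERROR_CHECK(esp_wifi_init(&cfg));
// Both rate-configuration calls must land between esp_wifi_init() and esp_wifi_start().
ESP_ERROR_CHECK(esp_wifi_config_11b_rate(WIFI_IF_STA, true));                 // drop 11b rates on the STA interface
ESP_ERROR_CHECK(esp_wifi_config_espnow_rate(WIFI_IF_STA, WIFI_PHY_RATE_6M));  // pin ESP-NOW to 6 Mbps
ESP_ERROR_CHECK(esp_wifi_start());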
/**
 * @brief Set the interval at which the station wakes up periodically while disconnected.
 *
 * @attention 1. This configuration takes effect only when ESP_WIFI_STA_DISCONNECTED_PM_ENABLE is enabled
 * @attention 2. This configuration only works in station mode while disconnected
 * @attention 3. This configuration has no effect until some module configures a wake_window
 * @attention 4. A sensible interval that is not too small is recommended (e.g. 100ms)
 *
 * @param interval how often the chip wakes up, in microseconds, from 1 to 65535.
*/
esp_err_t esp_wifi_set_connectionless_wake_interval(uint16_t interval);
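For illustration, the interval only matters once some module sets a wake window; a minimal sketch with the value borrowed from the attention above (an assumption, not a recommendation):
// Wake the disconnected station periodically; 100 mirrors the example in attention 4.
// Pair this with a module-side window, e.g. esp_now_set_wake_window(), to actually stay receivable.
ESP_ERROR_CHECK(esp_wifi_set_connectionless_wake_interval(100));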
#ifdef __cplusplus
}
#endif

View File

@ -21,10 +21,6 @@ extern "C" {
#define ESP_CAL_DATA_CHECK_FAIL 1
#if CONFIG_MAC_BB_PD
#define MAC_BB_PD_MEM_SIZE (192*4)
#endif
/**
* @file phy.h
* @brief Declarations for functions provided by libphy.a
@ -80,6 +76,16 @@ void phy_close_rf(void);
void phy_xpd_tsens(void);
#endif
/**
* @brief Store and load PHY digital registers.
*
* @param backup_en if backup_en is true, store PHY digital registers to memory. Otherwise load PHY digital registers from memory
* @param mem_addr Memory address to store and load PHY digital registers
*
* @return memory size
*/
uint8_t phy_dig_reg_backup(bool backup_en, uint32_t *mem_addr);
#if CONFIG_MAC_BB_PD
/**
* @brief Store and load baseband registers.

View File

@ -74,6 +74,9 @@ static int64_t s_phy_rf_en_ts = 0;
/* PHY spinlock for libphy.a */
static DRAM_ATTR portMUX_TYPE s_phy_int_mux = portMUX_INITIALIZER_UNLOCKED;
/* Memory to store PHY digital registers */
static uint32_t* s_phy_digital_regs_mem = NULL;
#if CONFIG_MAC_BB_PD
uint32_t* s_mac_bb_pd_mem = NULL;
#endif
@ -198,6 +201,24 @@ IRAM_ATTR void esp_phy_common_clock_disable(void)
wifi_bt_common_module_disable();
}
static inline void phy_digital_regs_store(void)
{
if (s_phy_digital_regs_mem == NULL) {
s_phy_digital_regs_mem = (uint32_t *)malloc(SOC_PHY_DIG_REGS_MEM_SIZE);
}
if (s_phy_digital_regs_mem != NULL) {
phy_dig_reg_backup(true, s_phy_digital_regs_mem);
}
}
static inline void phy_digital_regs_load(void)
{
if (s_phy_digital_regs_mem != NULL) {
phy_dig_reg_backup(false, s_phy_digital_regs_mem);
}
}
void esp_phy_enable(void)
{
_lock_acquire(&s_phy_access_lock);
@ -217,6 +238,7 @@ void esp_phy_enable(void)
}
else {
phy_wakeup_init();
phy_digital_regs_load();
}
#if CONFIG_IDF_TARGET_ESP32
@ -240,6 +262,7 @@ void esp_phy_disable(void)
s_phy_access_ref--;
if (s_phy_access_ref == 0) {
phy_digital_regs_store();
// Disable PHY and RF.
phy_close_rf();
#if CONFIG_IDF_TARGET_ESP32C3
@ -263,7 +286,7 @@ void esp_mac_bb_pd_mem_init(void)
_lock_acquire(&s_phy_access_lock);
if (s_mac_bb_pd_mem == NULL) {
s_mac_bb_pd_mem = (uint32_t *)heap_caps_malloc(MAC_BB_PD_MEM_SIZE, MALLOC_CAP_DMA|MALLOC_CAP_INTERNAL);
s_mac_bb_pd_mem = (uint32_t *)heap_caps_malloc(SOC_MAC_BB_PD_MEM_SIZE, MALLOC_CAP_DMA|MALLOC_CAP_INTERNAL);
}
_lock_release(&s_phy_access_lock);

View File

@ -53,7 +53,7 @@ uint64_t g_wifi_feature_caps =
#if CONFIG_ESP32_WIFI_ENABLE_WPA3_SAE
CONFIG_FEATURE_WPA3_SAE_BIT |
#endif
#if (CONFIG_ESP32_SPIRAM_SUPPORT | CONFIG_ESP32S2_SPIRAM_SUPPORT)
#if (CONFIG_ESP32_SPIRAM_SUPPORT || CONFIG_ESP32S2_SPIRAM_SUPPORT || CONFIG_ESP32S3_SPIRAM_SUPPORT)
CONFIG_FEATURE_CACHE_TX_BUF_BIT |
#endif
#if CONFIG_ESP_WIFI_FTM_INITIATOR_SUPPORT
@ -133,7 +133,7 @@ esp_err_t esp_wifi_deinit(void)
if (esp_wifi_get_user_init_flag_internal()) {
ESP_LOGE(TAG, "Wi-Fi not stop");
return ESP_FAIL;
return ESP_ERR_WIFI_NOT_STOPPED;
}
esp_supplicant_deinit();
@ -151,6 +151,9 @@ esp_err_t esp_wifi_deinit(void)
esp_pm_unregister_skip_light_sleep_callback(esp_wifi_internal_is_tsf_active);
esp_pm_unregister_inform_out_light_sleep_overhead_callback(esp_wifi_internal_update_light_sleep_wake_ahead_time);
#endif
#if CONFIG_ESP_WIFI_SLP_IRAM_OPT
esp_pm_unregister_light_sleep_default_params_config_callback();
#endif
#endif
#if CONFIG_MAC_BB_PD
esp_unregister_mac_bb_pd_callback(pm_mac_sleep);
@ -186,7 +189,6 @@ static void esp_wifi_config_info(void)
#endif
#ifdef CONFIG_ESP_WIFI_SLP_IRAM_OPT
esp_wifi_internal_optimize_wake_ahead_time();
ESP_LOGI(TAG, "WiFi SLP IRAM OP enabled");
#endif
@ -222,6 +224,20 @@ esp_err_t esp_wifi_init(const wifi_init_config_t *config)
}
#endif
#if CONFIG_ESP_WIFI_SLP_IRAM_OPT
esp_pm_register_light_sleep_default_params_config_callback(esp_wifi_internal_update_light_sleep_default_params);
int min_freq_mhz = esp_pm_impl_get_cpu_freq(PM_MODE_LIGHT_SLEEP);
int max_freq_mhz = esp_pm_impl_get_cpu_freq(PM_MODE_CPU_MAX);
esp_wifi_internal_update_light_sleep_default_params(min_freq_mhz, max_freq_mhz);
uint32_t sleep_delay_us = CONFIG_ESP_WIFI_SLP_DEFAULT_MIN_ACTIVE_TIME * 1000;
esp_wifi_set_sleep_delay_time(sleep_delay_us);
uint32_t keep_alive_time_us = CONFIG_ESP_WIFI_SLP_DEFAULT_MAX_ACTIVE_TIME * 1000 * 1000;
esp_wifi_set_keep_alive_time(keep_alive_time_us);
#endif
#if SOC_WIFI_HW_TSF
esp_err_t ret = esp_pm_register_skip_light_sleep_callback(esp_wifi_internal_is_tsf_active);
if (ret != ESP_OK) {

View File

@ -66,7 +66,7 @@ static esp_err_t wifi_transmit(void *h, void *buffer, size_t len)
static esp_err_t wifi_transmit_wrap(void *h, void *buffer, size_t len, void *netstack_buf)
{
wifi_netif_driver_t driver = h;
#if (CONFIG_ESP32_SPIRAM_SUPPORT | CONFIG_ESP32S2_SPIRAM_SUPPORT)
#if (CONFIG_ESP32_SPIRAM_SUPPORT || CONFIG_ESP32S2_SPIRAM_SUPPORT || CONFIG_ESP32S3_SPIRAM_SUPPORT)
return esp_wifi_internal_tx_by_ref(driver->wifi_if, buffer, len, netstack_buf);
#else
return esp_wifi_internal_tx(driver->wifi_if, buffer, len);

View File

@ -18,6 +18,10 @@
#include "esp_err.h"
#include "esp_private/panic_internal.h"
#ifdef __cplusplus
extern "C" {
#endif
/**************************************************************************************/
/******************************** EXCEPTION MODE API **********************************/
/**************************************************************************************/
@ -85,4 +89,8 @@ void esp_core_dump_to_uart(panic_info_t *info);
*/
esp_err_t esp_core_dump_image_get(size_t* out_addr, size_t *out_size);
#ifdef __cplusplus
}
#endif
#endif

View File

@ -127,6 +127,9 @@ void vMBPortExitCritical(void);
#define MB_PORT_CHECK_EVENT( event, mask ) ( event & mask )
#define MB_PORT_CLEAR_EVENT( event, mask ) do { event &= ~mask; } while(0)
#define MB_PORT_PARITY_GET(parity) ((parity != UART_PARITY_DISABLE) ? \
((parity == UART_PARITY_ODD) ? MB_PAR_ODD : MB_PAR_EVEN) : MB_PAR_NONE)
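For illustration, the macro maps the IDF UART parity enum onto the Modbus stack's eMBParity values, e.g.:
// MB_PORT_PARITY_GET(UART_PARITY_DISABLE) -> MB_PAR_NONE
// MB_PORT_PARITY_GET(UART_PARITY_ODD)     -> MB_PAR_ODD
// MB_PORT_PARITY_GET(UART_PARITY_EVEN)    -> MB_PAR_EVEN
eMBParity parity = MB_PORT_PARITY_GET(UART_PARITY_EVEN);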
// Legacy Modbus logging function
#if MB_TCP_DEBUG
void vMBPortLog( eMBPortLogLevel eLevel, const CHAR * szModule,

View File

@ -184,7 +184,6 @@ BOOL xMBPortSerialInit(UCHAR ucPORT, ULONG ulBaudRate,
UCHAR ucDataBits, eMBParity eParity)
{
esp_err_t xErr = ESP_OK;
MB_PORT_CHECK((eParity <= MB_PAR_EVEN), FALSE, "mb serial set parity failure.");
// Set communication port number
ucUartNumber = ucPORT;
// Configure serial communication parameters
@ -200,6 +199,9 @@ BOOL xMBPortSerialInit(UCHAR ucPORT, ULONG ulBaudRate,
case MB_PAR_EVEN:
ucParity = UART_PARITY_EVEN;
break;
default:
ESP_LOGE(TAG, "Incorrect parity option: %d", eParity);
return FALSE;
}
switch(ucDataBits){
case 5:

View File

@ -180,7 +180,6 @@ static void vUartTask(void* pvParameters)
BOOL xMBMasterPortSerialInit( UCHAR ucPORT, ULONG ulBaudRate, UCHAR ucDataBits, eMBParity eParity )
{
esp_err_t xErr = ESP_OK;
MB_PORT_CHECK((eParity <= MB_PAR_EVEN), FALSE, "mb serial set parity failure.");
// Set communication port number
ucUartNumber = ucPORT;
// Configure serial communication parameters
@ -196,6 +195,9 @@ BOOL xMBMasterPortSerialInit( UCHAR ucPORT, ULONG ulBaudRate, UCHAR ucDataBits,
case MB_PAR_EVEN:
ucParity = UART_PARITY_EVEN;
break;
default:
ESP_LOGE(TAG, "Incorrect parity option: %d", eParity);
return FALSE;
}
switch(ucDataBits){
case 5:

View File

@ -83,7 +83,7 @@ static esp_err_t mbc_serial_master_setup(void* comm_info)
(uint32_t)comm_info_ptr->mode);
MB_MASTER_CHECK((comm_info_ptr->port <= UART_NUM_MAX), ESP_ERR_INVALID_ARG,
"mb wrong port to set = (0x%x).", (uint32_t)comm_info_ptr->port);
MB_MASTER_CHECK((comm_info_ptr->parity <= UART_PARITY_EVEN), ESP_ERR_INVALID_ARG,
MB_MASTER_CHECK((comm_info_ptr->parity <= UART_PARITY_ODD), ESP_ERR_INVALID_ARG,
"mb wrong parity option = (0x%x).", (uint32_t)comm_info_ptr->parity);
// Save the communication options
mbm_opts->mbm_comm = *(mb_communication_info_t*)comm_info_ptr;
@ -102,7 +102,9 @@ static esp_err_t mbc_serial_master_start(void)
// Initialize Modbus stack using mbcontroller parameters
status = eMBMasterSerialInit((eMBMode)comm_info->mode, (UCHAR)comm_info->port,
(ULONG)comm_info->baudrate, (eMBParity)comm_info->parity);
(ULONG)comm_info->baudrate,
MB_PORT_PARITY_GET(comm_info->parity));
MB_MASTER_CHECK((status == MB_ENOERR), ESP_ERR_INVALID_STATE,
"mb stack initialization failure, eMBInit() returns (0x%x).", status);
status = eMBMasterEnable();

View File

@ -75,7 +75,7 @@ static esp_err_t mbc_serial_slave_setup(void* comm_info)
(uint32_t)comm_settings->slave_addr);
MB_SLAVE_CHECK((comm_settings->port < UART_NUM_MAX), ESP_ERR_INVALID_ARG,
"mb wrong port to set = (0x%x).", (uint32_t)comm_settings->port);
MB_SLAVE_CHECK((comm_settings->parity <= UART_PARITY_EVEN), ESP_ERR_INVALID_ARG,
MB_SLAVE_CHECK((comm_settings->parity <= UART_PARITY_ODD), ESP_ERR_INVALID_ARG,
"mb wrong parity option = (0x%x).", (uint32_t)comm_settings->parity);
// Set communication options of the controller
@ -91,12 +91,15 @@ static esp_err_t mbc_serial_slave_start(void)
"Slave interface is not correctly initialized.");
mb_slave_options_t* mbs_opts = &mbs_interface_ptr->opts;
eMBErrorCode status = MB_EIO;
const mb_communication_info_t* comm_info = (mb_communication_info_t*)&mbs_opts->mbs_comm;
// Initialize Modbus stack using mbcontroller parameters
status = eMBInit((eMBMode)mbs_opts->mbs_comm.mode,
(UCHAR)mbs_opts->mbs_comm.slave_addr,
(UCHAR)mbs_opts->mbs_comm.port,
(ULONG)mbs_opts->mbs_comm.baudrate,
(eMBParity)mbs_opts->mbs_comm.parity);
status = eMBInit((eMBMode)comm_info->mode,
(UCHAR)comm_info->slave_addr,
(UCHAR)comm_info->port,
(ULONG)comm_info->baudrate,
MB_PORT_PARITY_GET(comm_info->parity));
MB_SLAVE_CHECK((status == MB_ENOERR), ESP_ERR_INVALID_STATE,
"mb stack initialization failure, eMBInit() returns (0x%x).", status);
status = eMBEnable();

View File

@ -294,6 +294,9 @@ void vApplicationSleep( TickType_t xExpectedIdleTime );
#define portVALID_TCB_MEM(ptr) esp_ptr_byte_accessible(ptr)
#define portVALID_STACK_MEM(ptr) esp_ptr_byte_accessible(ptr)
/* Get tick rate per second */
uint32_t xPortGetTickRateHz(void);
// configASSERT_2 if requested
#if configASSERT_2
#include <stdio.h>

View File

@ -81,6 +81,7 @@
#include "sdkconfig.h"
#include "soc/soc_caps.h"
#include "soc/periph_defs.h"
#include "soc/system_reg.h"
#include "hal/systimer_hal.h"
@ -177,6 +178,24 @@ void prvTaskExitError(void)
void vPortClearInterruptMask(int mask)
{
REG_WRITE(INTERRUPT_CORE0_CPU_INT_THRESH_REG, mask);
/**
* The delay between the moment we unmask the interrupt threshold register
* and the moment the potential requested interrupt is triggered is not
* null: up to three machine cycles/instructions can be executed.
*
* When compilation size optimization is enabled, this function and its
* callers returning void will have NO epilogue, thus the instruction
* following these calls will be executed.
*
* If the requested interrupt is a context switch to a higher priority
 * task than the one currently running, we MUST NOT execute any instruction
* before the interrupt effectively happens.
* In order to prevent this, force this routine to have a 3-instruction
* delay before exiting.
*/
asm volatile ( "nop" );
asm volatile ( "nop" );
asm volatile ( "nop" );
}
/* Set interrupt mask and return current interrupt enable register */
@ -187,6 +206,17 @@ int vPortSetInterruptMask(void)
ret = REG_READ(INTERRUPT_CORE0_CPU_INT_THRESH_REG);
REG_WRITE(INTERRUPT_CORE0_CPU_INT_THRESH_REG, RVHAL_EXCM_LEVEL);
RV_SET_CSR(mstatus, old_mstatus & MSTATUS_MIE);
/**
* In theory, this function should not return immediately as there is a
* delay between the moment we mask the interrupt threshold register and
* the moment a potential lower-priority interrupt is triggered (as said
* above), it should have a delay of 2 machine cycles/instructions.
*
* However, in practice, this function has an epilogue of one instruction,
* thus the instruction masking the interrupt threshold register is
* followed by two instructions: `ret` and `csrrs` (RV_SET_CSR).
* That's why we don't need any additional nop instructions here.
*/
return ret;
}
@ -302,11 +332,17 @@ void vPortYield(void)
}
#define STACK_WATCH_AREA_SIZE 32
#define STACK_WATCH_POINT_NUMBER (SOC_CPU_WATCHPOINTS_NUM - 1)
void vPortSetStackWatchpoint(void *pxStackStart)
{
uint32_t addr = (uint32_t)pxStackStart;
addr = (addr + (STACK_WATCH_AREA_SIZE - 1)) & (~(STACK_WATCH_AREA_SIZE - 1));
esp_set_watchpoint(7, (char *)addr, STACK_WATCH_AREA_SIZE, ESP_WATCHPOINT_STORE);
esp_set_watchpoint(STACK_WATCH_POINT_NUMBER, (char *)addr, STACK_WATCH_AREA_SIZE, ESP_WATCHPOINT_STORE);
}
uint32_t xPortGetTickRateHz(void) {
return (uint32_t)configTICK_RATE_HZ;
}
BaseType_t xPortInIsrContext(void)

View File

@ -416,6 +416,9 @@ void vPortAssertIfInISR(void)
configASSERT(xPortInIsrContext());
}
#define STACK_WATCH_AREA_SIZE 32
#define STACK_WATCH_POINT_NUMBER (SOC_CPU_WATCHPOINTS_NUM - 1)
void vPortSetStackWatchpoint( void* pxStackStart ) {
//Set watchpoint 1 to watch the last 32 bytes of the stack.
//Unfortunately, the Xtensa watchpoints can't set a watchpoint on a random [base - base+n] region because
@ -425,7 +428,7 @@ void vPortSetStackWatchpoint( void* pxStackStart ) {
//This way, we make sure we trigger before/when the stack canary is corrupted, not after.
int addr=(int)pxStackStart;
addr=(addr+31)&(~31);
esp_set_watchpoint(1, (char*)addr, 32, ESP_WATCHPOINT_STORE);
esp_set_watchpoint(STACK_WATCH_POINT_NUMBER, (char*)addr, 32, ESP_WATCHPOINT_STORE);
}
uint32_t xPortGetTickRateHz(void) {

View File

@ -24,8 +24,6 @@
#define TICKS_TO_MS(x) (((x)*1000)/TICK_RATE)
#define REF_TO_ROUND_MS(x) (((x)+500)/1000)
#if !TEMPORARY_DISABLED_FOR_TARGETS(ESP32S2)
static SemaphoreHandle_t task_delete_semphr;
static void delaying_task(void* arg)
@ -73,5 +71,3 @@ TEST_CASE("Test vTaskDelayUntil", "[freertos]")
vSemaphoreDelete(task_delete_semphr);
ref_clock_deinit();
}
#endif // CONFIG_IDF_TARGET_ESP32S2

View File

@ -89,9 +89,6 @@ TEST_CASE("Suspend/resume task on other core", "[freertos]")
}
#endif
#if !TEMPORARY_DISABLED_FOR_TARGETS(ESP32C3)
// TODO ESP32C3 IDF-2588
/* Task suspends itself, then sets a flag and deletes itself */
static void task_suspend_self(void *vp_resumed)
{
@ -118,7 +115,6 @@ TEST_CASE("Suspend the current running task", "[freertos]")
// Shouldn't need any delay here, as task should resume on this CPU immediately
TEST_ASSERT_TRUE(resumed);
}
#endif // !TEMPORARY_DISABLED_FOR_TARGETS(ESP32C3)
volatile bool timer_isr_fired;

View File

@ -281,10 +281,7 @@ void adc_hal_digi_stop(adc_dma_hal_context_t *adc_dma_ctx, adc_dma_hal_config_t
void adc_hal_digi_init(adc_dma_hal_context_t *adc_dma_ctx, adc_dma_hal_config_t *dma_config)
{
adc_dma_ctx->dev = &GDMA;
gdma_ll_enable_clock(adc_dma_ctx->dev, true);
gdma_ll_clear_interrupt_status(adc_dma_ctx->dev, dma_config->dma_chan, UINT32_MAX);
gdma_ll_rx_connect_to_periph(adc_dma_ctx->dev, dma_config->dma_chan, SOC_GDMA_TRIG_PERIPH_ADC0);
adc_ll_adc1_onetime_sample_enable(false);
adc_ll_adc2_onetime_sample_enable(false);
}

View File

@ -946,9 +946,10 @@ static inline void spi_ll_enable_int(spi_dev_t *hw)
/**
* Reset RX DMA which stores the data received from a peripheral into RAM.
*
* @param dma_in Beginning address of the DMA peripheral registers which stores the data received from a peripheral into RAM.
* @param dma_in Beginning address of the DMA peripheral registers which stores the data received from a peripheral into RAM.
* @param channel DMA channel, for chip version compatibility, not used.
*/
static inline void spi_dma_ll_rx_reset(spi_dma_dev_t *dma_in)
static inline void spi_dma_ll_rx_reset(spi_dma_dev_t *dma_in, uint32_t channel)
{
//Reset RX DMA peripheral
dma_in->dma_conf.in_rst = 1;
@ -958,10 +959,11 @@ static inline void spi_dma_ll_rx_reset(spi_dma_dev_t *dma_in)
/**
* Start RX DMA.
*
* @param dma_in Beginning address of the DMA peripheral registers which stores the data received from a peripheral into RAM.
* @param addr Address of the beginning DMA descriptor.
* @param dma_in Beginning address of the DMA peripheral registers which stores the data received from a peripheral into RAM.
* @param channel DMA channel, for chip version compatibility, not used.
* @param addr Address of the beginning DMA descriptor.
*/
static inline void spi_dma_ll_rx_start(spi_dma_dev_t *dma_in, lldesc_t *addr)
static inline void spi_dma_ll_rx_start(spi_dma_dev_t *dma_in, uint32_t channel, lldesc_t *addr)
{
dma_in->dma_in_link.addr = (int) addr & 0xFFFFF;
dma_in->dma_in_link.start = 1;
@ -971,9 +973,10 @@ static inline void spi_dma_ll_rx_start(spi_dma_dev_t *dma_in, lldesc_t *addr)
* Enable DMA RX channel burst for data
*
* @param dma_in Beginning address of the DMA peripheral registers which stores the data received from a peripheral into RAM.
* @param channel DMA channel, for chip version compatibility, not used.
* @param enable True to enable, false to disable
*/
static inline void spi_dma_ll_rx_enable_burst_data(spi_dma_dev_t *dma_in, bool enable)
static inline void spi_dma_ll_rx_enable_burst_data(spi_dma_dev_t *dma_in, uint32_t channel, bool enable)
{
//This is not supported in esp32
}
@ -982,9 +985,10 @@ static inline void spi_dma_ll_rx_enable_burst_data(spi_dma_dev_t *dma_in, bool e
* Enable DMA RX channel burst for descriptor
*
* @param dma_in Beginning address of the DMA peripheral registers which stores the data received from a peripheral into RAM.
* @param channel DMA channel, for chip version compatibility, not used.
* @param enable True to enable, false to disable
*/
static inline void spi_dma_ll_rx_enable_burst_desc(spi_dma_dev_t *dma_in, bool enable)
static inline void spi_dma_ll_rx_enable_burst_desc(spi_dma_dev_t *dma_in, uint32_t channel, bool enable)
{
dma_in->dma_conf.indscr_burst_en = enable;
}
@ -993,8 +997,9 @@ static inline void spi_dma_ll_rx_enable_burst_desc(spi_dma_dev_t *dma_in, bool e
* Reset TX DMA which transmits the data from RAM to a peripheral.
*
* @param dma_out Beginning address of the DMA peripheral registers which transmits the data from RAM to a peripheral.
* @param channel DMA channel, for chip version compatibility, not used.
*/
static inline void spi_dma_ll_tx_reset(spi_dma_dev_t *dma_out)
static inline void spi_dma_ll_tx_reset(spi_dma_dev_t *dma_out, uint32_t channel)
{
//Reset TX DMA peripheral
dma_out->dma_conf.out_rst = 1;
@ -1005,9 +1010,10 @@ static inline void spi_dma_ll_tx_reset(spi_dma_dev_t *dma_out)
* Start TX DMA.
*
* @param dma_out Beginning address of the DMA peripheral registers which transmits the data from RAM to a peripheral.
* @param channel DMA channel, for chip version compatibility, not used.
* @param addr Address of the beginning DMA descriptor.
*/
static inline void spi_dma_ll_tx_start(spi_dma_dev_t *dma_out, lldesc_t *addr)
static inline void spi_dma_ll_tx_start(spi_dma_dev_t *dma_out, uint32_t channel, lldesc_t *addr)
{
dma_out->dma_out_link.addr = (int) addr & 0xFFFFF;
dma_out->dma_out_link.start = 1;
@ -1017,9 +1023,10 @@ static inline void spi_dma_ll_tx_start(spi_dma_dev_t *dma_out, lldesc_t *addr)
* Enable DMA TX channel burst for data
*
* @param dma_out Beginning address of the DMA peripheral registers which transmits the data from RAM to a peripheral.
* @param channel DMA channel, for chip version compatibility, not used.
* @param enable True to enable, false to disable
*/
static inline void spi_dma_ll_tx_enable_burst_data(spi_dma_dev_t *dma_out, bool enable)
static inline void spi_dma_ll_tx_enable_burst_data(spi_dma_dev_t *dma_out, uint32_t channel, bool enable)
{
dma_out->dma_conf.out_data_burst_en = enable;
}
@ -1028,9 +1035,10 @@ static inline void spi_dma_ll_tx_enable_burst_data(spi_dma_dev_t *dma_out, bool
* Enable DMA TX channel burst for descriptor
*
* @param dma_out Beginning address of the DMA peripheral registers which transmits the data from RAM to a peripheral.
* @param channel DMA channel, for chip version compatibility, not used.
* @param enable True to enable, false to disable
*/
static inline void spi_dma_ll_tx_enable_burst_desc(spi_dma_dev_t *dma_out, bool enable)
static inline void spi_dma_ll_tx_enable_burst_desc(spi_dma_dev_t *dma_out, uint32_t channel, bool enable)
{
dma_out->dma_conf.outdscr_burst_en = enable;
}
@ -1039,9 +1047,10 @@ static inline void spi_dma_ll_tx_enable_burst_desc(spi_dma_dev_t *dma_out, bool
* Configuration of OUT EOF flag generation way
*
* @param dma_out Beginning address of the DMA peripheral registers which transmits the data from RAM to a peripheral.
* @param channel DMA channel, for chip version compatibility, not used.
* @param enable 1: when dma pop all data from fifo 0:when ahb push all data to fifo.
*/
static inline void spi_dma_ll_set_out_eof_generation(spi_dma_dev_t *dma_out, bool enable)
static inline void spi_dma_ll_set_out_eof_generation(spi_dma_dev_t *dma_out, uint32_t channel, bool enable)
{
dma_out->dma_conf.out_eof_mode = enable;
}
@ -1050,9 +1059,10 @@ static inline void spi_dma_ll_set_out_eof_generation(spi_dma_dev_t *dma_out, boo
* Enable automatic outlink-writeback
*
* @param dma_out Beginning address of the DMA peripheral registers which transmits the data from RAM to a peripheral.
* @param channel DMA channel, for chip version compatibility, not used.
* @param enable True to enable, false to disable
*/
static inline void spi_dma_ll_enable_out_auto_wrback(spi_dma_dev_t *dma_out, bool enable)
static inline void spi_dma_ll_enable_out_auto_wrback(spi_dma_dev_t *dma_out, uint32_t channel, bool enable)
{
//does not configure it in ESP32
}

View File

@ -26,6 +26,7 @@
#include "soc/gpio_periph.h"
#include "soc/rtc_cntl_reg.h"
#include "hal/gpio_types.h"
#include "stdlib.h"
#ifdef __cplusplus
extern "C" {
@ -343,7 +344,11 @@ static inline void gpio_ll_deep_sleep_hold_dis(gpio_dev_t *hw)
*/
static inline void gpio_ll_hold_en(gpio_dev_t *hw, gpio_num_t gpio_num)
{
SET_PERI_REG_MASK(RTC_CNTL_DIG_PAD_HOLD_REG, GPIO_HOLD_MASK[gpio_num]);
if (gpio_num <= GPIO_NUM_5) {
REG_SET_BIT(RTC_CNTL_PAD_HOLD_REG, BIT(gpio_num));
} else {
SET_PERI_REG_MASK(RTC_CNTL_DIG_PAD_HOLD_REG, GPIO_HOLD_MASK[gpio_num]);
}
}
/**
@ -354,7 +359,11 @@ static inline void gpio_ll_hold_en(gpio_dev_t *hw, gpio_num_t gpio_num)
*/
static inline void gpio_ll_hold_dis(gpio_dev_t *hw, gpio_num_t gpio_num)
{
CLEAR_PERI_REG_MASK(RTC_CNTL_DIG_PAD_HOLD_REG, GPIO_HOLD_MASK[gpio_num]);
if (gpio_num <= GPIO_NUM_5) {
REG_CLR_BIT(RTC_CNTL_PAD_HOLD_REG, BIT(gpio_num));
} else {
CLEAR_PERI_REG_MASK(RTC_CNTL_DIG_PAD_HOLD_REG, GPIO_HOLD_MASK[gpio_num]);
}
}
/**
@ -390,11 +399,13 @@ static inline void gpio_ll_force_hold_all(gpio_dev_t *hw)
{
CLEAR_PERI_REG_MASK(RTC_CNTL_DIG_ISO_REG, RTC_CNTL_DG_PAD_FORCE_UNHOLD);
SET_PERI_REG_MASK(RTC_CNTL_DIG_ISO_REG, RTC_CNTL_DG_PAD_FORCE_HOLD);
SET_PERI_REG_MASK(RTC_CNTL_PWC_REG, RTC_CNTL_PAD_FORCE_HOLD_M);
}
static inline void gpio_ll_force_unhold_all(gpio_dev_t *hw)
static inline void gpio_ll_force_unhold_all(void)
{
CLEAR_PERI_REG_MASK(RTC_CNTL_DIG_ISO_REG, RTC_CNTL_DG_PAD_FORCE_HOLD);
CLEAR_PERI_REG_MASK(RTC_CNTL_PWC_REG, RTC_CNTL_PAD_FORCE_HOLD_M);
SET_PERI_REG_MASK(RTC_CNTL_DIG_ISO_REG, RTC_CNTL_DG_PAD_FORCE_UNHOLD);
SET_PERI_REG_MASK(RTC_CNTL_DIG_ISO_REG, RTC_CNTL_CLR_DG_PAD_AUTOHOLD);
}
@ -509,6 +520,41 @@ static inline void gpio_ll_sleep_output_enable(gpio_dev_t *hw, gpio_num_t gpio_n
PIN_SLP_OUTPUT_ENABLE(GPIO_PIN_MUX_REG[gpio_num]);
}
/**
* @brief Enable GPIO deep-sleep wake-up function.
*
* @param hw Peripheral GPIO hardware instance address.
* @param gpio_num GPIO number.
* @param intr_type GPIO wake-up type. Only GPIO_INTR_LOW_LEVEL or GPIO_INTR_HIGH_LEVEL can be used.
*/
static inline void gpio_ll_deepsleep_wakeup_enable(gpio_dev_t *hw, gpio_num_t gpio_num, gpio_int_type_t intr_type)
{
if (gpio_num > GPIO_NUM_5) {
abort(); // GPIOs larger than 5 are not supported
}
REG_SET_BIT(RTC_CNTL_GPIO_WAKEUP_REG, RTC_CNTL_GPIO_PIN_CLK_GATE);
REG_SET_BIT(RTC_CNTL_EXT_WAKEUP_CONF_REG, RTC_CNTL_GPIO_WAKEUP_FILTER);
SET_PERI_REG_MASK(RTC_CNTL_GPIO_WAKEUP_REG, 1 << (RTC_CNTL_GPIO_PIN0_WAKEUP_ENABLE_S - gpio_num));
uint32_t reg = REG_READ(RTC_CNTL_GPIO_WAKEUP_REG);
reg &= (~(RTC_CNTL_GPIO_PIN0_INT_TYPE_V << (RTC_CNTL_GPIO_PIN0_INT_TYPE_S - gpio_num * 3)));
reg |= (intr_type << (RTC_CNTL_GPIO_PIN0_INT_TYPE_S - gpio_num * 3));
REG_WRITE(RTC_CNTL_GPIO_WAKEUP_REG, reg);
}
/**
* @brief Disable GPIO deep-sleep wake-up function.
*
* @param hw Peripheral GPIO hardware instance address.
* @param gpio_num GPIO number
*/
static inline void gpio_ll_deepsleep_wakeup_disable(gpio_dev_t *hw, gpio_num_t gpio_num)
{
if (gpio_num > GPIO_NUM_5) {
abort(); // GPIOs larger than 5 are not supported
}
CLEAR_PERI_REG_MASK(RTC_CNTL_GPIO_WAKEUP_REG, 1 << (RTC_CNTL_GPIO_PIN0_WAKEUP_ENABLE_S - gpio_num));
CLEAR_PERI_REG_MASK(RTC_CNTL_GPIO_WAKEUP_REG, RTC_CNTL_GPIO_PIN0_INT_TYPE_S - gpio_num * 3);
}
#ifdef __cplusplus
}

View File

@ -32,19 +32,18 @@ static inline void rtc_cntl_ll_set_wakeup_timer(uint64_t t)
SET_PERI_REG_MASK(RTC_CNTL_SLP_TIMER1_REG, RTC_CNTL_MAIN_TIMER_ALARM_EN_M);
}
static inline uint32_t rtc_cntl_ll_ext1_get_wakeup_pins(void)
static inline uint32_t rtc_cntl_ll_gpio_get_wakeup_pins(void)
{
abort(); // ESP32-C3 TODO IDF-2106
return GET_PERI_REG_MASK(RTC_CNTL_GPIO_WAKEUP_REG, RTC_CNTL_GPIO_WAKEUP_STATUS);
}
static inline void rtc_cntl_ll_ext1_set_wakeup_pins(uint32_t mask, int mode)
static inline void rtc_cntl_ll_gpio_set_wakeup_pins(void)
{
abort(); // ESP32-C3 TODO IDF-2106
REG_CLR_BIT(RTC_CNTL_GPIO_WAKEUP_REG, RTC_CNTL_GPIO_WAKEUP_STATUS_CLR);
}
static inline void rtc_cntl_ll_ext1_clear_wakeup_pins(void)
static inline void rtc_cntl_ll_gpio_clear_wakeup_pins(void)
{
abort(); // ESP32-C3 TODO IDF-2106
REG_SET_BIT(RTC_CNTL_GPIO_WAKEUP_REG, RTC_CNTL_GPIO_WAKEUP_STATUS_CLR);
}

View File

@ -1,145 +0,0 @@
// Copyright 2015-2020 Espressif Systems (Shanghai) PTE LTD
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
/*******************************************************************************
* NOTICE
* The ll is not public api, don't use in application code.
* See readme.md in soc/include/hal/readme.md
******************************************************************************/
/* Note: ESP32-C3 does not have a full RTC_IO module, this LL only controls
hold/wakeup/32kHz crystal functions for IOs */
#pragma once
#include <stdlib.h>
#include "soc/rtc_cntl_reg.h"
#include "hal/rtc_io_types.h"
#include "hal/gpio_types.h"
#ifdef __cplusplus
extern "C" {
#endif
typedef enum {
RTCIO_FUNC_RTC = 0x0, /*!< The pin controled by RTC module. */
RTCIO_FUNC_DIGITAL = 0x1, /*!< The pin controlled by DIGITAL module. */
} rtcio_ll_func_t;
typedef enum {
RTCIO_WAKEUP_DISABLE = 0, /*!< Disable GPIO interrupt */
RTCIO_WAKEUP_LOW_LEVEL = 0x4, /*!< GPIO interrupt type : input low level trigger */
RTCIO_WAKEUP_HIGH_LEVEL = 0x5, /*!< GPIO interrupt type : input high level trigger */
} rtcio_ll_wake_type_t;
typedef enum {
RTCIO_OUTPUT_NORMAL = 0, /*!< RTCIO output mode is normal. */
RTCIO_OUTPUT_OD = 0x1, /*!< RTCIO output mode is open-drain. */
} rtcio_ll_out_mode_t;
/**
* @brief Select the rtcio function.
*
* @note The RTC function must be selected before the pad analog function is enabled.
* @param rtcio_num The index of rtcio. 0 ~ MAX(rtcio).
* @param func Select pin function.
*/
static inline void rtcio_ll_function_select(int rtcio_num, rtcio_ll_func_t func)
{
abort(); // TODO ESP32-C3 IDF-2407
}
/**
* Enable force hold function for RTC IO pad.
*
* Enabling HOLD function will cause the pad to lock current status, such as,
* input/output enable, input/output value, function, drive strength values.
* This function is useful when going into light or deep sleep mode to prevent
* the pin configuration from changing.
*
* @param rtcio_num The index of rtcio. 0 ~ MAX(rtcio).
*/
static inline void rtcio_ll_force_hold_enable(int rtcio_num)
{
abort(); // TODO ESP32-C3 IDF-2407
}
/**
* Disable hold function on an RTC IO pad
*
* @note If disable the pad hold, the status of pad maybe changed in sleep mode.
* @param rtcio_num The index of rtcio. 0 ~ MAX(rtcio).
*/
static inline void rtcio_ll_force_hold_disable(int rtcio_num)
{
abort(); // TODO ESP32-C3 IDF-2407
}
/**
* Enable force hold function for RTC IO pad.
*
* Enabling HOLD function will cause the pad to lock current status, such as,
* input/output enable, input/output value, function, drive strength values.
* This function is useful when going into light or deep sleep mode to prevent
* the pin configuration from changing.
*
* @param rtcio_num The index of rtcio. 0 ~ MAX(rtcio).
*/
static inline void rtcio_ll_force_hold_all(void)
{
abort(); // TODO ESP32-C3 IDF-2407
}
/**
* Disable hold function on an RTC IO pad
*
* @note If disable the pad hold, the status of pad maybe changed in sleep mode.
* @param rtcio_num The index of rtcio. 0 ~ MAX(rtcio).
*/
static inline void rtcio_ll_force_unhold_all(void)
{
CLEAR_PERI_REG_MASK(RTC_CNTL_PWC_REG, RTC_CNTL_PAD_FORCE_HOLD_M);
}
/**
* Enable wakeup function and set wakeup type from light sleep status for rtcio.
*
* @param rtcio_num The index of rtcio. 0 ~ MAX(rtcio).
* @param type Wakeup on high level or low level.
*/
static inline void rtcio_ll_wakeup_enable(int rtcio_num, rtcio_ll_wake_type_t type)
{
abort(); // TODO ESP32-C3 IDF-2407
}
/**
* Disable wakeup function from light sleep status for rtcio.
*
* @param rtcio_num The index of rtcio. 0 ~ MAX(rtcio).
*/
static inline void rtcio_ll_wakeup_disable(int rtcio_num)
{
abort(); // TODO ESP32-C3 IDF-2407
}
static inline void rtcio_ll_ext0_set_wakeup_pin(int rtcio_num, int level)
{
abort(); // TODO ESP32-C3 IDF-2106 IDF-2407
}
#ifdef __cplusplus
}
#endif

View File

@ -145,20 +145,24 @@ static inline uint32_t uart_ll_get_sclk_freq(uart_dev_t *hw)
* @brief Configure the baud-rate.
*
* @param hw Beginning address of the peripheral registers.
* @param baud The baud rate to be set. When the source clock is APB, the max baud rate is `UART_LL_BITRATE_MAX`
* @param baud The baud rate to be set.
*
* @return None
*/
static inline void uart_ll_set_baudrate(uart_dev_t *hw, uint32_t baud)
{
const int sclk_div = 1;
#define DIV_UP(a, b) (((a) + (b) - 1) / (b))
uint32_t sclk_freq = uart_ll_get_sclk_freq(hw);
uint32_t clk_div = ((sclk_freq) << 4) / baud;
const uint32_t max_div = BIT(12) - 1; // UART divider integer part only has 12 bits
int sclk_div = DIV_UP(sclk_freq, max_div * baud);
uint32_t clk_div = ((sclk_freq) << 4) / (baud * sclk_div);
// The baud rate configuration register is divided into
// an integer part and a fractional part.
hw->clk_div.div_int = clk_div >> 4;
hw->clk_div.div_frag = clk_div & 0xf;
hw->clk_conf.sclk_div_num = sclk_div - 1;//7;//255;
hw->clk_conf.sclk_div_num = sclk_div - 1;
#undef DIV_UP
}
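A worked example of why the new pre-divider is needed, assuming an 80 MHz source clock and 9600 baud (values chosen only for illustration):
/* Without the pre-divider: clk_div = (80000000 << 4) / 9600 = 133333, so div_int = 133333 >> 4 = 8333,
 * which overflows the 12-bit integer field (max 4095).
 * With it: sclk_div = DIV_UP(80000000, 4095 * 9600) = 3,
 *          clk_div  = (80000000 << 4) / (9600 * 3) = 44444,
 *          div_int  = 44444 >> 4 = 2777, div_frag = 44444 & 0xf = 12, sclk_div_num = 2. */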
/**
@ -172,7 +176,7 @@ static inline uint32_t uart_ll_get_baudrate(uart_dev_t *hw)
{
uint32_t sclk_freq = uart_ll_get_sclk_freq(hw);
typeof(hw->clk_div) div_reg = hw->clk_div;
return ((sclk_freq << 4)) / ((div_reg.div_int << 4) | div_reg.div_frag);
return ((sclk_freq << 4)) / (((div_reg.div_int << 4) | div_reg.div_frag) * (hw->clk_conf.sclk_div_num + 1));
}
/**

View File

@ -413,7 +413,7 @@ static inline void gpio_ll_force_hold_all(gpio_dev_t *hw)
SET_PERI_REG_MASK(RTC_CNTL_DIG_ISO_REG, RTC_CNTL_DG_PAD_FORCE_HOLD);
}
static inline void gpio_ll_force_unhold_all(gpio_dev_t *hw)
static inline void gpio_ll_force_unhold_all(void)
{
CLEAR_PERI_REG_MASK(RTC_CNTL_DIG_ISO_REG, RTC_CNTL_DG_PAD_FORCE_HOLD);
SET_PERI_REG_MASK(RTC_CNTL_DIG_ISO_REG, RTC_CNTL_DG_PAD_FORCE_UNHOLD);

View File

@ -1077,13 +1077,14 @@ static inline uint32_t spi_ll_slave_hd_get_last_addr(spi_dev_t* hw)
* RX DMA (Peripherals->DMA->RAM)
* TX DMA (RAM->DMA->Peripherals)
*----------------------------------------------------------------------------*/
//---------------------------------------------------RX-------------------------------------------------//
/**
* Reset RX DMA which stores the data received from a peripheral into RAM.
*
* @param hw Beginning address of the peripheral registers.
* @param dma_in Beginning address of the DMA peripheral registers which stores the data received from a peripheral into RAM.
* @param dma_in Beginning address of the DMA peripheral registers which stores the data received from a peripheral into RAM.
* @param channel DMA channel, for chip version compatibility, not used.
*/
static inline void spi_dma_ll_rx_reset(spi_dma_dev_t *dma_in)
static inline void spi_dma_ll_rx_reset(spi_dma_dev_t *dma_in, uint32_t channel)
{
//Reset RX DMA peripheral
dma_in->dma_in_link.dma_rx_ena = 0;
@ -1096,10 +1097,11 @@ static inline void spi_dma_ll_rx_reset(spi_dma_dev_t *dma_in)
/**
* Start RX DMA.
*
* @param dma_in Beginning address of the DMA peripheral registers which stores the data received from a peripheral into RAM.
* @param addr Address of the beginning DMA descriptor.
* @param dma_in Beginning address of the DMA peripheral registers which stores the data received from a peripheral into RAM.
* @param channel DMA channel, for chip version compatibility, not used.
* @param addr Address of the beginning DMA descriptor.
*/
static inline void spi_dma_ll_rx_start(spi_dma_dev_t *dma_in, lldesc_t *addr)
static inline void spi_dma_ll_rx_start(spi_dma_dev_t *dma_in, uint32_t channel, lldesc_t *addr)
{
dma_in->dma_in_link.addr = (int) addr & 0xFFFFF;
dma_in->dma_in_link.start = 1;
@ -1109,31 +1111,46 @@ static inline void spi_dma_ll_rx_start(spi_dma_dev_t *dma_in, lldesc_t *addr)
* Enable DMA RX channel burst for data
*
* @param dma_in Beginning address of the DMA peripheral registers which stores the data received from a peripheral into RAM.
* @param channel DMA channel, for chip version compatibility, not used.
* @param enable True to enable, false to disable
*/
static inline void spi_dma_ll_rx_enable_burst_data(spi_dma_dev_t *dma_in, bool enable)
static inline void spi_dma_ll_rx_enable_burst_data(spi_dma_dev_t *dma_in, uint32_t channel, bool enable)
{
//This is not supported in esp32s2
}
/**
* Enable DMA TX channel burst for descriptor
* Enable DMA RX channel burst for descriptor
*
* @param dma_in Beginning address of the DMA peripheral registers which stores the data received from a peripheral into RAM.
* @param dma_in Beginning address of the DMA peripheral registers which stores the data received from a peripheral into RAM.
* @param channel DMA channel, for chip version compatibility, not used.
* @param enable True to enable, false to disable
*/
static inline void spi_dma_ll_rx_enable_burst_desc(spi_dma_dev_t *dma_in, bool enable)
static inline void spi_dma_ll_rx_enable_burst_desc(spi_dma_dev_t *dma_in, uint32_t channel, bool enable)
{
dma_in->dma_conf.indscr_burst_en = enable;
}
/**
* Get the last inlink descriptor address when DMA produces in_suc_eof interrupt
*
* @param dma_in Beginning address of the DMA peripheral registers which stores the data received from a peripheral into RAM.
* @param channel DMA channel, for chip version compatibility, not used.
* @return The address
*/
static inline uint32_t spi_dma_ll_get_in_suc_eof_desc_addr(spi_dma_dev_t *dma_in, uint32_t channel)
{
return dma_in->dma_in_suc_eof_des_addr;
}
//---------------------------------------------------TX-------------------------------------------------//
/**
* Reset TX DMA which transmits the data from RAM to a peripheral.
*
* @param hw Beginning address of the peripheral registers.
* @param dma_out Beginning address of the DMA peripheral registers which transmits the data from RAM to a peripheral.
* @param channel DMA channel, for chip version compatibility, not used.
*/
static inline void spi_dma_ll_tx_reset(spi_dma_dev_t *dma_out)
static inline void spi_dma_ll_tx_reset(spi_dma_dev_t *dma_out, uint32_t channel)
{
//Reset TX DMA peripheral
dma_out->dma_conf.out_rst = 1;
@ -1144,9 +1161,10 @@ static inline void spi_dma_ll_tx_reset(spi_dma_dev_t *dma_out)
* Start TX DMA.
*
* @param dma_out Beginning address of the DMA peripheral registers which transmits the data from RAM to a peripheral.
* @param addr Address of the beginning DMA descriptor.
* @param channel DMA channel, for chip version compatibility, not used.
* @param addr Address of the beginning DMA descriptor.
*/
static inline void spi_dma_ll_tx_start(spi_dma_dev_t *dma_out, lldesc_t *addr)
static inline void spi_dma_ll_tx_start(spi_dma_dev_t *dma_out, uint32_t channel, lldesc_t *addr)
{
dma_out->dma_out_link.addr = (int) addr & 0xFFFFF;
dma_out->dma_out_link.start = 1;
@ -1156,9 +1174,10 @@ static inline void spi_dma_ll_tx_start(spi_dma_dev_t *dma_out, lldesc_t *addr)
* Enable DMA TX channel burst for data
*
* @param dma_out Beginning address of the DMA peripheral registers which transmits the data from RAM to a peripheral.
* @param channel DMA channel, for chip version compatibility, not used.
* @param enable True to enable, false to disable
*/
static inline void spi_dma_ll_tx_enable_burst_data(spi_dma_dev_t *dma_out, bool enable)
static inline void spi_dma_ll_tx_enable_burst_data(spi_dma_dev_t *dma_out, uint32_t channel, bool enable)
{
dma_out->dma_conf.out_data_burst_en = enable;
}
@ -1167,9 +1186,10 @@ static inline void spi_dma_ll_tx_enable_burst_data(spi_dma_dev_t *dma_out, bool
* Enable DMA TX channel burst for descriptor
*
* @param dma_out Beginning address of the DMA peripheral registers which transmits the data from RAM to a peripheral.
* @param channel DMA channel, for chip version compatibility, not used.
* @param enable True to enable, false to disable
*/
static inline void spi_dma_ll_tx_enable_burst_desc(spi_dma_dev_t *dma_out, bool enable)
static inline void spi_dma_ll_tx_enable_burst_desc(spi_dma_dev_t *dma_out, uint32_t channel, bool enable)
{
dma_out->dma_conf.outdscr_burst_en = enable;
}
@ -1178,9 +1198,10 @@ static inline void spi_dma_ll_tx_enable_burst_desc(spi_dma_dev_t *dma_out, bool
* Configuration of OUT EOF flag generation way
*
* @param dma_out Beginning address of the DMA peripheral registers which transmits the data from RAM to a peripheral.
* @param channel DMA channel, for chip version compatibility, not used.
* @param enable 1: when dma pop all data from fifo 0:when ahb push all data to fifo.
*/
static inline void spi_dma_ll_set_out_eof_generation(spi_dma_dev_t *dma_out, bool enable)
static inline void spi_dma_ll_set_out_eof_generation(spi_dma_dev_t *dma_out, uint32_t channel, bool enable)
{
dma_out->dma_conf.out_eof_mode = enable;
}
@ -1189,19 +1210,32 @@ static inline void spi_dma_ll_set_out_eof_generation(spi_dma_dev_t *dma_out, boo
* Enable automatic outlink-writeback
*
* @param dma_out Beginning address of the DMA peripheral registers which transmits the data from RAM to a peripheral.
* @param channel DMA channel, for chip version compatibility, not used.
* @param enable True to enable, false to disable
*/
static inline void spi_dma_ll_enable_out_auto_wrback(spi_dma_dev_t *dma_out, bool enable)
static inline void spi_dma_ll_enable_out_auto_wrback(spi_dma_dev_t *dma_out, uint32_t channel, bool enable)
{
dma_out->dma_conf.out_auto_wrback = enable;
}
static inline void spi_dma_ll_rx_restart(spi_dma_dev_t *dma_in)
/**
* Get the address of the last outlink descriptor when the DMA raises the out_eof interrupt
*
* @param dma_out Beginning address of the DMA peripheral registers which transmits the data from RAM to a peripheral.
* @param channel DMA channel, for chip version compatibility, not used.
* @return The address
*/
static inline uint32_t spi_dma_ll_get_out_eof_desc_addr(spi_dma_dev_t *dma_out, uint32_t channel)
{
return dma_out->dma_out_eof_des_addr;
}
static inline void spi_dma_ll_rx_restart(spi_dma_dev_t *dma_in, uint32_t channel)
{
dma_in->dma_in_link.restart = 1;
}
static inline void spi_dma_ll_tx_restart(spi_dma_dev_t *dma_out)
static inline void spi_dma_ll_tx_restart(spi_dma_dev_t *dma_out, uint32_t channel)
{
dma_out->dma_out_link.restart = 1;
}
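The dma_out_link.addr field only holds 20 address bits, hence the (int) addr & 0xFFFFF mask in spi_dma_ll_tx_start() above. A minimal standalone sketch of that masking; the example address is made up for illustration:

#include <stdint.h>
#include <stdio.h>

/* Illustrative only: reproduces the 20-bit masking done in spi_dma_ll_tx_start();
 * 0x3FFB1230 is an arbitrary example of an internal-RAM descriptor address. */
int main(void)
{
    uint32_t desc_addr = 0x3FFB1230;
    uint32_t link_addr_field = desc_addr & 0xFFFFF;   /* value written to dma_out_link.addr */
    printf("descriptor at 0x%08X -> link.addr = 0x%05X\n",
           (unsigned)desc_addr, (unsigned)link_addr_field);
    return 0;
}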


@ -413,7 +413,7 @@ static inline void gpio_ll_force_hold_all(gpio_dev_t *hw)
SET_PERI_REG_MASK(RTC_CNTL_DIG_ISO_REG, RTC_CNTL_DG_PAD_FORCE_HOLD);
}
static inline void gpio_ll_force_unhold_all(gpio_dev_t *hw)
static inline void gpio_ll_force_unhold_all(void)
{
CLEAR_PERI_REG_MASK(RTC_CNTL_DIG_ISO_REG, RTC_CNTL_DG_PAD_FORCE_HOLD);
SET_PERI_REG_MASK(RTC_CNTL_DIG_ISO_REG, RTC_CNTL_DG_PAD_FORCE_UNHOLD);


@ -133,7 +133,6 @@ static inline void spimem_flash_ll_resume(spi_mem_dev_t *dev)
*/
static inline void spimem_flash_ll_auto_suspend_init(spi_mem_dev_t *dev, bool auto_sus)
{
dev->user.usr_dummy_idle = 1;// MUST SET 1, to avoid missing Resume
dev->flash_sus_ctrl.flash_pes_en = auto_sus; // enable Flash Auto-Suspend.
}


@ -129,20 +129,24 @@ static inline uint32_t uart_ll_get_sclk_freq(uart_dev_t *hw)
* @brief Configure the baud-rate.
*
* @param hw Beginning address of the peripheral registers.
* @param baud The baud rate to be set. When the source clock is APB, the max baud rate is `UART_LL_BITRATE_MAX`
* @param baud The baud rate to be set.
*
* @return None
*/
static inline void uart_ll_set_baudrate(uart_dev_t *hw, uint32_t baud)
{
uint32_t sclk_freq, clk_div;
#define DIV_UP(a, b) (((a) + (b) - 1) / (b))
uint32_t sclk_freq = uart_ll_get_sclk_freq(hw);
const uint32_t max_div = BIT(12) - 1; // UART divider integer part only has 12 bits
int sclk_div = DIV_UP(sclk_freq, max_div * baud);
sclk_freq = uart_ll_get_sclk_freq(hw);
clk_div = ((sclk_freq) << 4) / baud;
uint32_t clk_div = ((sclk_freq) << 4) / (baud * sclk_div);
// The baud rate configuration register is divided into
// an integer part and a fractional part.
hw->clk_div.div_int = clk_div >> 4;
hw->clk_div.div_frag = clk_div & 0xf;
hw->clk_conf.sclk_div_num = sclk_div - 1;
#undef DIV_UP
}
/**
@ -156,7 +160,7 @@ static inline uint32_t uart_ll_get_baudrate(uart_dev_t *hw)
{
uint32_t sclk_freq = uart_ll_get_sclk_freq(hw);
typeof(hw->clk_div) div_reg = hw->clk_div;
return ((sclk_freq << 4)) / ((div_reg.div_int << 4) | div_reg.div_frag);
return ((sclk_freq << 4)) / (((div_reg.div_int << 4) | div_reg.div_frag) * (hw->clk_conf.sclk_div_num + 1));
}
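The reworked uart_ll_set_baudrate() above first picks an integer pre-divider sclk_div so that the remaining ratio fits the 12-bit integer field, then splits that ratio into a 4-bit fractional part; uart_ll_get_baudrate() reverses the calculation. A standalone sketch of the same arithmetic, using an assumed 40 MHz source clock and 9600 baud as example figures (not values taken from this diff):

#include <stdint.h>
#include <stdio.h>

#define DIV_UP(a, b) (((a) + (b) - 1) / (b))

int main(void)
{
    uint32_t sclk_freq = 40 * 1000 * 1000;   /* example UART source clock */
    uint32_t baud = 9600;                    /* example baud rate         */
    const uint32_t max_div = (1 << 12) - 1;  /* 12-bit integer divider    */

    uint32_t sclk_div = DIV_UP(sclk_freq, max_div * baud);   /* pre-divider, here 2 */
    uint32_t clk_div  = (sclk_freq << 4) / (baud * sclk_div);

    uint32_t div_int  = clk_div >> 4;    /* goes to clk_div.div_int  */
    uint32_t div_frag = clk_div & 0xf;   /* goes to clk_div.div_frag */

    /* Reverse calculation, mirroring uart_ll_get_baudrate() */
    uint32_t actual = (sclk_freq << 4) / (((div_int << 4) | div_frag) * sclk_div);

    printf("sclk_div_num=%u div_int=%u div_frag=%u actual_baud=%u\n",
           (unsigned)(sclk_div - 1), (unsigned)div_int, (unsigned)div_frag,
           (unsigned)actual);
    return 0;
}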
/**


@ -335,7 +335,7 @@ void gpio_hal_intr_disable(gpio_hal_context_t *hal, gpio_num_t gpio_num);
*
* @param hal Context of the HAL layer
* */
#define gpio_hal_force_unhold_all(hal) gpio_ll_force_unhold_all((hal)->dev)
#define gpio_hal_force_unhold_all() gpio_ll_force_unhold_all()
#endif
#if SOC_GPIO_SUPPORT_SLP_SWITCH
@ -435,8 +435,37 @@ void gpio_hal_sleep_pupd_config_apply(gpio_hal_context_t *hal, gpio_num_t gpio_n
* @param gpio_num GPIO number.
*/
void gpio_hal_sleep_pupd_config_unapply(gpio_hal_context_t *hal, gpio_num_t gpio_num);
#endif
#endif
#endif // CONFIG_GPIO_ESP32_SUPPORT_SWITCH_SLP_PULL
#endif //SOC_GPIO_SUPPORT_SLP_SWITCH
#if SOC_GPIO_SUPPORT_DEEPSLEEP_WAKEUP
/**
* @brief Enable GPIO deep-sleep wake-up function.
*
* @param hal Context of the HAL layer
* @param gpio_num GPIO number.
* @param intr_type GPIO wake-up type. Only GPIO_INTR_LOW_LEVEL or GPIO_INTR_HIGH_LEVEL can be used.
*/
#define gpio_hal_deepsleep_wakeup_enable(hal, gpio_num, intr_type) gpio_ll_deepsleep_wakeup_enable((hal)->dev, gpio_num, intr_type)
/**
* @brief Disable GPIO deep-sleep wake-up function.
*
* @param hal Context of the HAL layer
* @param gpio_num GPIO number
*/
#define gpio_hal_deepsleep_wakeup_disable(hal, gpio_num) gpio_ll_deepsleep_wakeup_disable((hal)->dev, gpio_num)
/**
* @brief Check whether the GPIO can be used to wake up the chip from deep sleep
*
* @param gpio_num GPIO number
*/
#define gpio_hal_is_valid_deepsleep_wakeup_gpio(gpio_num) (gpio_num <= GPIO_NUM_5)
#endif //SOC_GPIO_SUPPORT_DEEPSLEEP_WAKEUP
#ifdef __cplusplus
}
#endif
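A hedged usage sketch of the new deep-sleep wake-up macros follows. It assumes an already-initialized gpio_hal_context_t and the GPIO_INTR_HIGH_LEVEL value from hal/gpio_types.h; the function name is invented for illustration.

#include "hal/gpio_hal.h"

static void example_enable_deepsleep_wakeup(gpio_hal_context_t *hal, gpio_num_t pin)
{
    /* The validity macro above only accepts GPIO0..GPIO5 on this target. */
    if (!gpio_hal_is_valid_deepsleep_wakeup_gpio(pin)) {
        return;
    }
    gpio_hal_deepsleep_wakeup_enable(hal, pin, GPIO_INTR_HIGH_LEVEL);
}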


@ -292,7 +292,7 @@ typedef enum {
GPIO_NUM_16 = 16, /*!< GPIO16, input and output */
GPIO_NUM_17 = 17, /*!< GPIO17, input and output */
GPIO_NUM_18 = 18, /*!< GPIO18, input and output */
// TODO: ESP32C3 IDF-2463
GPIO_NUM_19 = 19, /*!< GPIO19, input and output */
GPIO_NUM_20 = 20, /*!< GPIO20, input and output */
GPIO_NUM_21 = 21, /*!< GPIO21, input and output */
GPIO_NUM_22 = 22, /*!< GPIO22, input and output */


@ -15,17 +15,33 @@
#pragma once
#include "hal/gpio_types.h"
#include "hal/rtc_io_ll.h"
#include "hal/rtc_cntl_ll.h"
#if !CONFIG_IDF_TARGET_ESP32C3
#include "hal/rtc_io_ll.h"
#endif
#define RTC_HAL_DMA_LINK_NODE_SIZE (16)
#if SOC_PM_SUPPORT_EXT_WAKEUP
#define rtc_hal_ext1_get_wakeup_pins() rtc_cntl_ll_ext1_get_wakeup_pins()
#define rtc_hal_ext1_set_wakeup_pins(mask, mode) rtc_cntl_ll_ext1_set_wakeup_pins(mask, mode)
#define rtc_hal_ext1_clear_wakeup_pins() rtc_cntl_ll_ext1_clear_wakeup_pins()
#endif
#if SOC_GPIO_SUPPORT_DEEPSLEEP_WAKEUP
#define rtc_hal_gpio_get_wakeup_pins() rtc_cntl_ll_gpio_get_wakeup_pins()
#define rtc_hal_gpio_clear_wakeup_pins() rtc_cntl_ll_gpio_clear_wakeup_pins()
#define rtc_hal_gpio_set_wakeup_pins() rtc_cntl_ll_gpio_set_wakeup_pins()
#endif
#define rtc_hal_set_wakeup_timer(ticks) rtc_cntl_ll_set_wakeup_timer(ticks)
void * rtc_cntl_hal_dma_link_init(void *elem, void *buff, int size, void *next);


@ -22,14 +22,17 @@
#pragma once
#include <esp_err.h>
#if !CONFIG_IDF_TARGET_ESP32C3
#include "soc/soc_caps.h"
#include "hal/rtc_io_ll.h"
#include <esp_err.h>
#endif
#ifdef __cplusplus
extern "C" {
#endif
#if SOC_RTCIO_INPUT_OUTPUT_SUPPORTED
/**
* Select the rtcio function.
*
@ -39,8 +42,6 @@ extern "C" {
*/
#define rtcio_hal_function_select(rtcio_num, func) rtcio_ll_function_select(rtcio_num, func)
#if SOC_RTCIO_INPUT_OUTPUT_SUPPORTED
/**
* Enable rtcio output.
*
@ -210,7 +211,6 @@ void rtcio_hal_set_direction_in_sleep(int rtcio_num, rtc_gpio_mode_t mode);
* @param rtcio_num The index of rtcio. 0 ~ SOC_RTCIO_PIN_COUNT.
*/
#define rtcio_hal_unhold_all() rtcio_ll_force_unhold_all()
#endif // SOC_RTCIO_HOLD_SUPPORTED
#if SOC_RTCIO_WAKE_SUPPORTED


@ -72,6 +72,7 @@ typedef struct {
typedef struct {
spi_dma_dev_t *dma_in; ///< Input DMA(DMA -> RAM) peripheral register address
spi_dma_dev_t *dma_out; ///< Output DMA(RAM -> DMA) peripheral register address
bool dma_enabled; ///< Whether DMA is enabled; do not update after initialization
lldesc_t *dmadesc_tx; /**< Array of DMA descriptor used by the TX DMA.
* The amount should be larger than dmadesc_n. The driver should ensure that
* the data to be sent is shorter than the descriptors can hold.
@ -80,8 +81,10 @@ typedef struct {
* The amount should be larger than dmadesc_n. The driver should ensure that
* the data to be sent is shorter than the descriptors can hold.
*/
uint32_t tx_dma_chan; ///< TX DMA channel
uint32_t rx_dma_chan; ///< RX DMA channel
int dmadesc_n; ///< The amount of descriptors of both ``dmadesc_tx`` and ``dmadesc_rx`` that the HAL can use.
} spi_hal_dma_config_t;
} spi_hal_config_t;
/**
* Transaction configuration structure, this should be assigned by driver each time.
@ -104,12 +107,24 @@ typedef struct {
* Context that should be maintained by both the driver and the HAL.
*/
typedef struct {
/* These two need to be malloced by the driver first */
lldesc_t *dmadesc_tx; /**< Array of DMA descriptor used by the TX DMA.
* The amount should be larger than dmadesc_n. The driver should ensure that
* the data to be sent is shorter than the descriptors can hold.
*/
lldesc_t *dmadesc_rx; /**< Array of DMA descriptor used by the RX DMA.
* The amount should be larger than dmadesc_n. The driver should ensure that
* the data to be sent is shorter than the descriptors can hold.
*/
/* Configured by driver at initialization, don't touch */
spi_dev_t *hw; ///< Beginning address of the peripheral registers.
spi_dma_dev_t *dma_in; ///< Address of the DMA peripheral registers which stores the data received from a peripheral into RAM (DMA -> RAM).
spi_dma_dev_t *dma_out; ///< Address of the DMA peripheral registers which transmits the data from RAM to a peripheral (RAM -> DMA).
bool dma_enabled; ///< Whether DMA is enabled; do not update after initialization
spi_hal_dma_config_t dma_config; ///< DMA configuration
uint32_t tx_dma_chan; ///< TX DMA channel
uint32_t rx_dma_chan; ///< RX DMA channel
int dmadesc_n; ///< The amount of descriptors of both ``dmadesc_tx`` and ``dmadesc_rx`` that the HAL can use.
/* Internal parameters, don't touch */
spi_hal_trans_config_t trans_config; ///< Transaction configuration
@ -144,10 +159,11 @@ typedef struct {
/**
* Init the peripheral and the context.
*
* @param hal Context of the HAL layer.
* @param host_id Index of the SPI peripheral. 0 for SPI1, 1 for HSPI (SPI2) and 2 for VSPI (SPI3).
* @param hal Context of the HAL layer.
* @param host_id Index of the SPI peripheral. 0 for SPI1, 1 for HSPI (SPI2) and 2 for VSPI (SPI3).
* @param hal_config Configuration of the hal defined by the upper layer.
*/
void spi_hal_init(spi_hal_context_t *hal, uint32_t host_id, const spi_hal_dma_config_t *hal_dma_config);
void spi_hal_init(spi_hal_context_t *hal, uint32_t host_id, const spi_hal_config_t *hal_config);
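A hedged sketch of how a driver could fill the renamed spi_hal_config_t before calling spi_hal_init() with its new signature. Only fields shown in the struct above are used; the static buffers, the descriptor count of 4 and the function name are illustrative assumptions, and the DMA channel numbers are expected to come from whatever channel allocation the driver performs.

#include <stdbool.h>
#include <stdint.h>
#include "hal/spi_hal.h"

static spi_hal_context_t s_hal;
static lldesc_t s_dmadesc_tx[4];   /* illustrative descriptor count */
static lldesc_t s_dmadesc_rx[4];

void example_spi_hal_setup(spi_dma_dev_t *dma_in, spi_dma_dev_t *dma_out,
                           uint32_t tx_chan, uint32_t rx_chan)
{
    spi_hal_config_t cfg = {
        .dma_enabled = true,
        .dma_in      = dma_in,        /* DMA -> RAM register block */
        .dma_out     = dma_out,       /* RAM -> DMA register block */
        .dmadesc_tx  = s_dmadesc_tx,
        .dmadesc_rx  = s_dmadesc_rx,
        .tx_dma_chan = tx_chan,       /* channels acquired by the driver */
        .rx_dma_chan = rx_chan,
        .dmadesc_n   = 4,
    };
    spi_hal_init(&s_hal, 1 /* HSPI (SPI2) */, &cfg);
}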
/**
* Deinit the peripheral (and the context if needed).


@ -55,7 +55,9 @@ typedef struct {
* The amount should be larger than dmadesc_n. The driver should ensure that
* the data to be sent is shorter than the descriptors can hold.
*/
int dmadesc_n; ///< The amount of descriptors of both ``dmadesc_tx`` and ``dmadesc_rx`` that the HAL can use.
int dmadesc_n; ///< The amount of descriptors of both ``dmadesc_tx`` and ``dmadesc_rx`` that the HAL can use.
uint32_t tx_dma_chan; ///< TX DMA channel
uint32_t rx_dma_chan; ///< RX DMA channel
/*
* configurations to be filled after ``spi_slave_hal_init``. Updated to


@ -70,7 +70,9 @@ typedef struct {
uint32_t host_id; ///< Host ID of the spi peripheral
spi_dma_dev_t *dma_in; ///< Input DMA(DMA -> RAM) peripheral register address
spi_dma_dev_t *dma_out; ///< Output DMA(RAM -> DMA) peripheral register address
uint32_t dma_chan; ///< The dma channel used.
bool dma_enabled; ///< DMA enabled or not
uint32_t tx_dma_chan; ///< TX DMA channel used.
uint32_t rx_dma_chan; ///< RX DMA channel used.
bool append_mode; ///< True for DMA append mode, false for segment mode
uint32_t spics_io_num; ///< CS GPIO pin for this device
uint8_t mode; ///< SPI mode (0-3)
@ -94,22 +96,28 @@ typedef struct {
spi_dev_t *dev; ///< Beginning address of the peripheral registers.
spi_dma_dev_t *dma_in; ///< Address of the DMA peripheral registers which stores the data received from a peripheral into RAM.
spi_dma_dev_t *dma_out; ///< Address of the DMA peripheral registers which transmits the data from RAM to a peripheral.
bool dma_enabled; ///< DMA enabled or not
uint32_t tx_dma_chan; ///< TX DMA channel used.
uint32_t rx_dma_chan; ///< RX DMA channel used.
bool append_mode; ///< True for DMA append mode, false for segment mode
uint32_t dma_desc_num; ///< Number of the available DMA descriptors. Calculated from ``bus_max_transfer_size``.
spi_slave_hd_hal_desc_append_t *tx_cur_desc; ///< Current TX DMA descriptor that could be linked (set up).
spi_slave_hd_hal_desc_append_t *tx_dma_head; ///< Head of the linked TX DMA descriptors which are not used by hardware
spi_slave_hd_hal_desc_append_t *tx_dma_tail; ///< Tail of the linked TX DMA descriptors which are not used by hardware
spi_slave_hd_hal_desc_append_t tx_dummy_head; ///< Dummy descriptor for ``tx_dma_head`` to start
uint32_t tx_used_desc_cnt; ///< Number of the TX descriptors that have been setup
uint32_t tx_recycled_desc_cnt; ///< Number of the TX descriptors that could be recycled
spi_slave_hd_hal_desc_append_t *rx_cur_desc; ///< Current RX DMA descriptor that could be linked (set up).
spi_slave_hd_hal_desc_append_t *rx_dma_head; ///< Head of the linked RX DMA descriptors which are not used by hardware
spi_slave_hd_hal_desc_append_t *rx_dma_tail; ///< Tail of the linked RX DMA descriptors which are not used by hardware
spi_slave_hd_hal_desc_append_t rx_dummy_head; ///< Dummy descriptor for ``rx_dma_head`` to start
uint32_t rx_used_desc_cnt; ///< Number of the RX descriptors that have been setup
uint32_t rx_recycled_desc_cnt; ///< Number of the RX descriptors that could be recycled
/* Internal status used by the HAL implementation, initialized as 0. */
uint32_t intr_not_triggered;
bool tx_dma_started;
bool rx_dma_started;
} spi_slave_hd_hal_context_t;
/**
@ -258,6 +266,10 @@ int spi_slave_hd_hal_get_last_addr(spi_slave_hd_hal_context_t *hal);
/**
* @brief Return the finished TX transaction
*
* @note This API relies on the assumption that the hardware state of transaction completion is only modified by its own caller layer.
* If other code changes that hardware state (e.g. clears the interrupt raw bit), or the caller uses this API without being aware of that state,
* this API may return wrong results.
*
* @param hal Context of the HAL layer
* @param out_trans Pointer to the caller-defined transaction
* @return 1: Transaction is finished; 0: Transaction is not finished
@ -267,6 +279,10 @@ bool spi_slave_hd_hal_get_tx_finished_trans(spi_slave_hd_hal_context_t *hal, voi
/**
* @brief Return the finished RX transaction
*
* @note This API relies on the assumption that the hardware state of transaction completion is only modified by its own caller layer.
* If other code changes that hardware state (e.g. clears the interrupt raw bit), or the caller uses this API without being aware of that state,
* this API may return wrong results.
*
* @param hal Context of the HAL layer
* @param out_trans Pointer to the caller-defined transaction
* @param out_len Actual number of bytes of received data
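The reworked completion check in these APIs compares the head descriptor's address with the EOF descriptor address reported by the DMA, starting from a dummy head node, instead of polling the descriptor owner bit. Below is a plain-C toy model of that bookkeeping; every name in it is invented for illustration and none of it is ESP-IDF API.

#include <stdbool.h>
#include <stdio.h>

typedef struct toy_desc {
    struct toy_desc *next;   /* stands in for STAILQ_NEXT(&desc, qe)   */
    int trans_id;            /* stands in for the caller-defined `arg` */
} toy_desc_t;

/* head: last descriptor already returned; eof: last descriptor the "DMA"
 * reports as completed (cf. spi_dma_ll_get_out_eof_desc_addr()). */
static bool toy_get_finished(toy_desc_t **head, const toy_desc_t *eof, int *out_id)
{
    if (*head == eof) {
        return false;            /* nothing new has completed */
    }
    *head = (*head)->next;
    *out_id = (*head)->trans_id;
    return true;
}

int main(void)
{
    toy_desc_t d2 = { .next = NULL, .trans_id = 2 };
    toy_desc_t d1 = { .next = &d2,  .trans_id = 1 };
    toy_desc_t dummy = { .next = &d1, .trans_id = 0 };   /* cf. tx_dummy_head */

    toy_desc_t *head = &dummy;
    int id;
    while (toy_get_finished(&head, &d1, &id)) {   /* pretend only d1 finished */
        printf("finished transaction %d\n", id);
    }
    return 0;
}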


@ -23,7 +23,7 @@
* @brief Enum with the three SPI peripherals that are software-accessible in it
*/
typedef enum {
// SPI_HOST (SPI1_HOST) is not supported by the SPI Master and SPI Slave driver on ESP32-S2
//SPI1 can be used as GPSPI only on ESP32
SPI1_HOST=0, ///< SPI1
SPI2_HOST=1, ///< SPI2
SPI3_HOST=2, ///< SPI3


@ -22,12 +22,12 @@
#include "soc/gdma_struct.h"
#include "hal/gdma_ll.h"
#define spi_dma_ll_rx_enable_burst_data(dev, enable) gdma_ll_rx_enable_data_burst(&GDMA, SOC_GDMA_SPI2_DMA_CHANNEL, enable);
#define spi_dma_ll_tx_enable_burst_data(dev, enable) gdma_ll_tx_enable_data_burst(&GDMA, SOC_GDMA_SPI2_DMA_CHANNEL, enable);
#define spi_dma_ll_rx_enable_burst_desc(dev, enable) gdma_ll_rx_enable_descriptor_burst(&GDMA, SOC_GDMA_SPI2_DMA_CHANNEL, enable);
#define spi_dma_ll_tx_enable_burst_desc(dev, enable) gdma_ll_tx_enable_descriptor_burst(&GDMA, SOC_GDMA_SPI2_DMA_CHANNEL, enable);
#define spi_dma_ll_enable_out_auto_wrback(dev, enable) gdma_ll_tx_enable_auto_write_back(&GDMA, SOC_GDMA_SPI2_DMA_CHANNEL, enable);
#define spi_dma_ll_set_out_eof_generation(dev, enable) gdma_ll_tx_set_eof_mode(&GDMA, SOC_GDMA_SPI2_DMA_CHANNEL, enable);
#define spi_dma_ll_rx_enable_burst_data(dev, chan, enable) gdma_ll_rx_enable_data_burst(&GDMA, chan, enable);
#define spi_dma_ll_tx_enable_burst_data(dev, chan, enable) gdma_ll_tx_enable_data_burst(&GDMA, chan, enable);
#define spi_dma_ll_rx_enable_burst_desc(dev, chan, enable) gdma_ll_rx_enable_descriptor_burst(&GDMA, chan, enable);
#define spi_dma_ll_tx_enable_burst_desc(dev, chan, enable) gdma_ll_tx_enable_descriptor_burst(&GDMA, chan, enable);
#define spi_dma_ll_enable_out_auto_wrback(dev, chan, enable) gdma_ll_tx_enable_auto_write_back(&GDMA, chan, enable);
#define spi_dma_ll_set_out_eof_generation(dev, chan, enable) gdma_ll_tx_set_eof_mode(&GDMA, chan, enable);
#endif
static const char SPI_HAL_TAG[] = "spi_hal";
@ -39,19 +39,25 @@ static const char SPI_HAL_TAG[] = "spi_hal";
static void s_spi_hal_dma_init_config(const spi_hal_context_t *hal)
{
spi_dma_ll_rx_enable_burst_data(hal->dma_in, 1);
spi_dma_ll_tx_enable_burst_data(hal->dma_out, 1);
spi_dma_ll_rx_enable_burst_desc(hal->dma_in, 1);
spi_dma_ll_tx_enable_burst_desc(hal->dma_out, 1);
spi_dma_ll_rx_enable_burst_data(hal->dma_in, hal->rx_dma_chan, 1);
spi_dma_ll_tx_enable_burst_data(hal->dma_out, hal->tx_dma_chan, 1);
spi_dma_ll_rx_enable_burst_desc(hal->dma_in, hal->rx_dma_chan, 1);
spi_dma_ll_tx_enable_burst_desc(hal->dma_out, hal->tx_dma_chan, 1);
}
void spi_hal_init(spi_hal_context_t *hal, uint32_t host_id, const spi_hal_dma_config_t *dma_config)
void spi_hal_init(spi_hal_context_t *hal, uint32_t host_id, const spi_hal_config_t *config)
{
memset(hal, 0, sizeof(spi_hal_context_t));
spi_dev_t *hw = SPI_LL_GET_HW(host_id);
hal->hw = hw;
hal->dma_in = dma_config->dma_in;
hal->dma_out = dma_config->dma_out;
hal->dma_in = config->dma_in;
hal->dma_out = config->dma_out;
hal->dma_enabled = config->dma_enabled;
hal->dmadesc_tx = config->dmadesc_tx;
hal->dmadesc_rx = config->dmadesc_rx;
hal->tx_dma_chan = config->tx_dma_chan;
hal->rx_dma_chan = config->rx_dma_chan;
hal->dmadesc_n = config->dmadesc_n;
spi_ll_master_init(hw);
s_spi_hal_dma_init_config(hal);
@ -63,9 +69,6 @@ void spi_hal_init(spi_hal_context_t *hal, uint32_t host_id, const spi_hal_dma_co
spi_ll_enable_int(hw);
spi_ll_set_int_stat(hw);
spi_ll_set_mosi_delay(hw, 0, 0);
//Save the dma configuration in ``spi_hal_context_t``
memcpy(&hal->dma_config, dma_config, sizeof(spi_hal_dma_config_t));
}
void spi_hal_deinit(spi_hal_context_t *hal)


@ -23,15 +23,15 @@
#include "soc/gdma_struct.h"
#include "hal/gdma_ll.h"
#define spi_dma_ll_rx_reset(dev) gdma_ll_rx_reset_channel(&GDMA, SOC_GDMA_SPI2_DMA_CHANNEL)
#define spi_dma_ll_tx_reset(dev) gdma_ll_tx_reset_channel(&GDMA, SOC_GDMA_SPI2_DMA_CHANNEL);
#define spi_dma_ll_rx_start(dev, addr) do {\
gdma_ll_rx_set_desc_addr(&GDMA, SOC_GDMA_SPI2_DMA_CHANNEL, (uint32_t)addr);\
gdma_ll_rx_start(&GDMA, SOC_GDMA_SPI2_DMA_CHANNEL);\
#define spi_dma_ll_rx_reset(dev, chan) gdma_ll_rx_reset_channel(&GDMA, chan)
#define spi_dma_ll_tx_reset(dev, chan) gdma_ll_tx_reset_channel(&GDMA, chan);
#define spi_dma_ll_rx_start(dev, chan, addr) do {\
gdma_ll_rx_set_desc_addr(&GDMA, chan, (uint32_t)addr);\
gdma_ll_rx_start(&GDMA, chan);\
} while (0)
#define spi_dma_ll_tx_start(dev, addr) do {\
gdma_ll_tx_set_desc_addr(&GDMA, SOC_GDMA_SPI2_DMA_CHANNEL, (uint32_t)addr);\
gdma_ll_tx_start(&GDMA, SOC_GDMA_SPI2_DMA_CHANNEL);\
#define spi_dma_ll_tx_start(dev, chan, addr) do {\
gdma_ll_tx_set_desc_addr(&GDMA, chan, (uint32_t)addr);\
gdma_ll_tx_start(&GDMA, chan);\
} while (0)
#endif
@ -143,12 +143,12 @@ void spi_hal_prepare_data(spi_hal_context_t *hal, const spi_hal_dev_config_t *de
if (!hal->dma_enabled) {
//No need to setup anything; we'll copy the result out of the work registers directly later.
} else {
lldesc_setup_link(hal->dma_config.dmadesc_rx, trans->rcv_buffer, ((trans->rx_bitlen + 7) / 8), true);
lldesc_setup_link(hal->dmadesc_rx, trans->rcv_buffer, ((trans->rx_bitlen + 7) / 8), true);
spi_dma_ll_rx_reset(hal->dma_in);
spi_dma_ll_rx_reset(hal->dma_in, hal->rx_dma_chan);
spi_ll_dma_rx_fifo_reset(hal->dma_in);
spi_ll_dma_rx_enable(hal->hw, 1);
spi_dma_ll_rx_start(hal->dma_in, hal->dma_config.dmadesc_rx);
spi_dma_ll_rx_start(hal->dma_in, hal->rx_dma_chan, hal->dmadesc_rx);
}
}
@ -157,7 +157,7 @@ void spi_hal_prepare_data(spi_hal_context_t *hal, const spi_hal_dev_config_t *de
//DMA temporary workaround: keep the RX DMA engine running to avoid an issue on ESP32 rev. 0/1 silicon
if (hal->dma_enabled && !dev->half_duplex) {
spi_ll_dma_rx_enable(hal->hw, 1);
spi_dma_ll_rx_start(hal->dma_in, 0);
spi_dma_ll_rx_start(hal->dma_in, hal->rx_dma_chan, 0);
}
}
#endif
@ -167,12 +167,12 @@ void spi_hal_prepare_data(spi_hal_context_t *hal, const spi_hal_dev_config_t *de
//Need to copy data to registers manually
spi_ll_write_buffer(hw, trans->send_buffer, trans->tx_bitlen);
} else {
lldesc_setup_link(hal->dma_config.dmadesc_tx, trans->send_buffer, (trans->tx_bitlen + 7) / 8, false);
lldesc_setup_link(hal->dmadesc_tx, trans->send_buffer, (trans->tx_bitlen + 7) / 8, false);
spi_dma_ll_tx_reset(hal->dma_out);
spi_dma_ll_tx_reset(hal->dma_out, hal->tx_dma_chan);
spi_ll_dma_tx_fifo_reset(hal->dma_in);
spi_ll_dma_tx_enable(hal->hw, 1);
spi_dma_ll_tx_start(hal->dma_out, hal->dma_config.dmadesc_tx);
spi_dma_ll_tx_start(hal->dma_out, hal->tx_dma_chan, hal->dmadesc_tx);
}
}


@ -7,20 +7,20 @@
#include "soc/gdma_struct.h"
#include "hal/gdma_ll.h"
#define spi_dma_ll_rx_enable_burst_data(dev, enable) gdma_ll_rx_enable_data_burst(&GDMA, SOC_GDMA_SPI3_DMA_CHANNEL, enable);
#define spi_dma_ll_tx_enable_burst_data(dev, enable) gdma_ll_tx_enable_data_burst(&GDMA, SOC_GDMA_SPI3_DMA_CHANNEL, enable);
#define spi_dma_ll_rx_enable_burst_desc(dev, enable) gdma_ll_rx_enable_descriptor_burst(&GDMA, SOC_GDMA_SPI3_DMA_CHANNEL, enable);
#define spi_dma_ll_tx_enable_burst_desc(dev, enable) gdma_ll_tx_enable_descriptor_burst(&GDMA, SOC_GDMA_SPI3_DMA_CHANNEL, enable);
#define spi_dma_ll_enable_out_auto_wrback(dev, enable) gdma_ll_tx_enable_auto_write_back(&GDMA, SOC_GDMA_SPI3_DMA_CHANNEL, enable);
#define spi_dma_ll_set_out_eof_generation(dev, enable) gdma_ll_tx_set_eof_mode(&GDMA, SOC_GDMA_SPI3_DMA_CHANNEL, enable);
#define spi_dma_ll_rx_enable_burst_data(dev, chan, enable) gdma_ll_rx_enable_data_burst(&GDMA, chan, enable);
#define spi_dma_ll_tx_enable_burst_data(dev, chan, enable) gdma_ll_tx_enable_data_burst(&GDMA, chan, enable);
#define spi_dma_ll_rx_enable_burst_desc(dev, chan, enable) gdma_ll_rx_enable_descriptor_burst(&GDMA, chan, enable);
#define spi_dma_ll_tx_enable_burst_desc(dev, chan, enable) gdma_ll_tx_enable_descriptor_burst(&GDMA, chan, enable);
#define spi_dma_ll_enable_out_auto_wrback(dev, chan, enable) gdma_ll_tx_enable_auto_write_back(&GDMA, chan, enable);
#define spi_dma_ll_set_out_eof_generation(dev, chan, enable) gdma_ll_tx_set_eof_mode(&GDMA, chan, enable);
#endif
static void s_spi_slave_hal_dma_init_config(const spi_slave_hal_context_t *hal)
{
spi_dma_ll_rx_enable_burst_data(hal->dma_in, 1);
spi_dma_ll_tx_enable_burst_data(hal->dma_out, 1);
spi_dma_ll_rx_enable_burst_desc(hal->dma_in, 1);
spi_dma_ll_tx_enable_burst_desc(hal->dma_out, 1);
spi_dma_ll_rx_enable_burst_data(hal->dma_in, hal->rx_dma_chan, 1);
spi_dma_ll_tx_enable_burst_data(hal->dma_out, hal->tx_dma_chan, 1);
spi_dma_ll_rx_enable_burst_desc(hal->dma_in, hal->rx_dma_chan, 1);
spi_dma_ll_tx_enable_burst_desc(hal->dma_out, hal->tx_dma_chan, 1);
}
void spi_slave_hal_init(spi_slave_hal_context_t *hal, const spi_slave_hal_config_t *hal_config)


@ -7,15 +7,15 @@
#include "soc/gdma_struct.h"
#include "hal/gdma_ll.h"
#define spi_dma_ll_rx_reset(dev) gdma_ll_rx_reset_channel(&GDMA, SOC_GDMA_SPI3_DMA_CHANNEL)
#define spi_dma_ll_tx_reset(dev) gdma_ll_tx_reset_channel(&GDMA, SOC_GDMA_SPI3_DMA_CHANNEL);
#define spi_dma_ll_rx_start(dev, addr) do {\
gdma_ll_rx_set_desc_addr(&GDMA, SOC_GDMA_SPI3_DMA_CHANNEL, (uint32_t)addr);\
gdma_ll_rx_start(&GDMA, SOC_GDMA_SPI3_DMA_CHANNEL);\
#define spi_dma_ll_rx_reset(dev, chan) gdma_ll_rx_reset_channel(&GDMA, chan)
#define spi_dma_ll_tx_reset(dev, chan) gdma_ll_tx_reset_channel(&GDMA, chan);
#define spi_dma_ll_rx_start(dev, chan, addr) do {\
gdma_ll_rx_set_desc_addr(&GDMA, chan, (uint32_t)addr);\
gdma_ll_rx_start(&GDMA, chan);\
} while (0)
#define spi_dma_ll_tx_start(dev, addr) do {\
gdma_ll_tx_set_desc_addr(&GDMA, SOC_GDMA_SPI3_DMA_CHANNEL, (uint32_t)addr);\
gdma_ll_tx_start(&GDMA, SOC_GDMA_SPI3_DMA_CHANNEL);\
#define spi_dma_ll_tx_start(dev, chan, addr) do {\
gdma_ll_tx_set_desc_addr(&GDMA, chan, (uint32_t)addr);\
gdma_ll_tx_start(&GDMA, chan);\
} while (0)
#endif
@ -39,24 +39,24 @@ void spi_slave_hal_prepare_data(const spi_slave_hal_context_t *hal)
lldesc_setup_link(hal->dmadesc_rx, hal->rx_buffer, ((hal->bitlen + 7) / 8), true);
//reset the DMA inlink; this must be done before the SPI-related reset
spi_dma_ll_rx_reset(hal->dma_in);
spi_dma_ll_rx_reset(hal->dma_in, hal->rx_dma_chan);
spi_ll_dma_rx_fifo_reset(hal->dma_in);
spi_ll_slave_reset(hal->hw);
spi_ll_infifo_full_clr(hal->hw);
spi_ll_dma_rx_enable(hal->hw, 1);
spi_dma_ll_rx_start(hal->dma_in, &hal->dmadesc_rx[0]);
spi_dma_ll_rx_start(hal->dma_in, hal->rx_dma_chan, &hal->dmadesc_rx[0]);
}
if (hal->tx_buffer) {
lldesc_setup_link(hal->dmadesc_tx, hal->tx_buffer, (hal->bitlen + 7) / 8, false);
//reset the DMA outlink; this must be done before the SPI-related reset
spi_dma_ll_tx_reset(hal->dma_out);
spi_dma_ll_tx_reset(hal->dma_out, hal->tx_dma_chan);
spi_ll_dma_tx_fifo_reset(hal->dma_out);
spi_ll_slave_reset(hal->hw);
spi_ll_outfifo_empty_clr(hal->hw);
spi_ll_dma_tx_enable(hal->hw, 1);
spi_dma_ll_tx_start(hal->dma_out, (&hal->dmadesc_tx[0]));
spi_dma_ll_tx_start(hal->dma_out, hal->tx_dma_chan, (&hal->dmadesc_tx[0]));
}
} else {
//No DMA. Turn off SPI and copy data to transmit buffers.


@ -29,31 +29,34 @@
#include "soc/gdma_struct.h"
#include "hal/gdma_ll.h"
#define spi_dma_ll_rx_reset(dev) gdma_ll_rx_reset_channel(&GDMA, SOC_GDMA_SPI3_DMA_CHANNEL)
#define spi_dma_ll_tx_reset(dev) gdma_ll_tx_reset_channel(&GDMA, SOC_GDMA_SPI3_DMA_CHANNEL);
#define spi_dma_ll_rx_enable_burst_data(dev, enable) gdma_ll_rx_enable_data_burst(&GDMA, SOC_GDMA_SPI3_DMA_CHANNEL, enable);
#define spi_dma_ll_tx_enable_burst_data(dev, enable) gdma_ll_tx_enable_data_burst(&GDMA, SOC_GDMA_SPI3_DMA_CHANNEL, enable);
#define spi_dma_ll_rx_enable_burst_desc(dev, enable) gdma_ll_rx_enable_descriptor_burst(&GDMA, SOC_GDMA_SPI3_DMA_CHANNEL, enable);
#define spi_dma_ll_tx_enable_burst_desc(dev, enable) gdma_ll_tx_enable_descriptor_burst(&GDMA, SOC_GDMA_SPI3_DMA_CHANNEL, enable);
#define spi_dma_ll_enable_out_auto_wrback(dev, enable) gdma_ll_tx_enable_auto_write_back(&GDMA, SOC_GDMA_SPI3_DMA_CHANNEL, enable);
#define spi_dma_ll_set_out_eof_generation(dev, enable) gdma_ll_tx_set_eof_mode(&GDMA, SOC_GDMA_SPI3_DMA_CHANNEL, enable);
#define spi_dma_ll_rx_start(dev, addr) do {\
gdma_ll_rx_set_desc_addr(&GDMA, SOC_GDMA_SPI3_DMA_CHANNEL, (uint32_t)addr);\
gdma_ll_rx_start(&GDMA, SOC_GDMA_SPI3_DMA_CHANNEL);\
#define spi_dma_ll_rx_reset(dev, chan) gdma_ll_rx_reset_channel(&GDMA, chan)
#define spi_dma_ll_tx_reset(dev, chan) gdma_ll_tx_reset_channel(&GDMA, chan)
#define spi_dma_ll_rx_enable_burst_data(dev, chan, enable) gdma_ll_rx_enable_data_burst(&GDMA, chan, enable)
#define spi_dma_ll_tx_enable_burst_data(dev, chan, enable) gdma_ll_tx_enable_data_burst(&GDMA, chan, enable)
#define spi_dma_ll_rx_enable_burst_desc(dev, chan, enable) gdma_ll_rx_enable_descriptor_burst(&GDMA, chan, enable)
#define spi_dma_ll_tx_enable_burst_desc(dev, chan, enable) gdma_ll_tx_enable_descriptor_burst(&GDMA, chan, enable)
#define spi_dma_ll_enable_out_auto_wrback(dev, chan, enable) gdma_ll_tx_enable_auto_write_back(&GDMA, chan, enable)
#define spi_dma_ll_set_out_eof_generation(dev, chan, enable) gdma_ll_tx_set_eof_mode(&GDMA, chan, enable)
#define spi_dma_ll_get_out_eof_desc_addr(dev, chan) gdma_ll_tx_get_eof_desc_addr(&GDMA, chan)
#define spi_dma_ll_get_in_suc_eof_desc_addr(dev, chan) gdma_ll_rx_get_success_eof_desc_addr(&GDMA, chan)
#define spi_dma_ll_rx_start(dev, chan, addr) do {\
gdma_ll_rx_set_desc_addr(&GDMA, chan, (uint32_t)addr);\
gdma_ll_rx_start(&GDMA, chan);\
} while (0)
#define spi_dma_ll_tx_start(dev, addr) do {\
gdma_ll_tx_set_desc_addr(&GDMA, SOC_GDMA_SPI3_DMA_CHANNEL, (uint32_t)addr);\
gdma_ll_tx_start(&GDMA, SOC_GDMA_SPI3_DMA_CHANNEL);\
#define spi_dma_ll_tx_start(dev, chan, addr) do {\
gdma_ll_tx_set_desc_addr(&GDMA, chan, (uint32_t)addr);\
gdma_ll_tx_start(&GDMA, chan);\
} while (0)
#endif
static void s_spi_slave_hd_hal_dma_init_config(const spi_slave_hd_hal_context_t *hal)
{
spi_dma_ll_rx_enable_burst_data(hal->dma_in, 1);
spi_dma_ll_tx_enable_burst_data(hal->dma_out, 1);
spi_dma_ll_rx_enable_burst_desc(hal->dma_in, 1);
spi_dma_ll_tx_enable_burst_desc(hal->dma_out, 1);
spi_dma_ll_enable_out_auto_wrback(hal->dma_out, 1);
spi_dma_ll_rx_enable_burst_data(hal->dma_in, hal->rx_dma_chan, 1);
spi_dma_ll_tx_enable_burst_data(hal->dma_out, hal->tx_dma_chan, 1);
spi_dma_ll_rx_enable_burst_desc(hal->dma_in, hal->rx_dma_chan, 1);
spi_dma_ll_tx_enable_burst_desc(hal->dma_out, hal->tx_dma_chan, 1);
spi_dma_ll_enable_out_auto_wrback(hal->dma_out, hal->tx_dma_chan, 1);
spi_dma_ll_set_out_eof_generation(hal->dma_out, hal->tx_dma_chan, 1);
}
void spi_slave_hd_hal_init(spi_slave_hd_hal_context_t *hal, const spi_slave_hd_hal_config_t *hal_config)
@ -62,9 +65,16 @@ void spi_slave_hd_hal_init(spi_slave_hd_hal_context_t *hal, const spi_slave_hd_h
hal->dev = hw;
hal->dma_in = hal_config->dma_in;
hal->dma_out = hal_config->dma_out;
hal->dma_enabled = hal_config->dma_enabled;
hal->tx_dma_chan = hal_config->tx_dma_chan;
hal->rx_dma_chan = hal_config->rx_dma_chan;
hal->append_mode = hal_config->append_mode;
hal->rx_cur_desc = hal->dmadesc_rx;
hal->tx_cur_desc = hal->dmadesc_tx;
STAILQ_NEXT(&hal->tx_dummy_head.desc, qe) = &hal->dmadesc_tx->desc;
hal->tx_dma_head = &hal->tx_dummy_head;
STAILQ_NEXT(&hal->rx_dummy_head.desc, qe) = &hal->dmadesc_rx->desc;
hal->rx_dma_head = &hal->rx_dummy_head;
//Configure slave
s_spi_slave_hd_hal_dma_init_config(hal);
@ -75,7 +85,7 @@ void spi_slave_hd_hal_init(spi_slave_hd_hal_context_t *hal, const spi_slave_hd_h
spi_ll_set_dummy(hw, hal_config->dummy_bits);
spi_ll_set_rx_lsbfirst(hw, hal_config->rx_lsbfirst);
spi_ll_set_tx_lsbfirst(hw, hal_config->tx_lsbfirst);
spi_ll_slave_set_mode(hw, hal_config->mode, (hal_config->dma_chan != 0));
spi_ll_slave_set_mode(hw, hal_config->mode, (hal_config->dma_enabled));
spi_ll_disable_intr(hw, UINT32_MAX);
spi_ll_clear_intr(hw, UINT32_MAX);
@ -134,14 +144,14 @@ void spi_slave_hd_hal_rxdma(spi_slave_hd_hal_context_t *hal, uint8_t *out_buf, s
lldesc_setup_link(&hal->dmadesc_rx->desc, out_buf, len, true);
spi_ll_dma_rx_fifo_reset(hal->dev);
spi_dma_ll_rx_reset(hal->dma_in);
spi_dma_ll_rx_reset(hal->dma_in, hal->rx_dma_chan);
spi_ll_slave_reset(hal->dev);
spi_ll_infifo_full_clr(hal->dev);
spi_ll_clear_intr(hal->dev, SPI_LL_INTR_CMD7);
spi_ll_slave_set_rx_bitlen(hal->dev, len * 8);
spi_ll_dma_rx_enable(hal->dev, 1);
spi_dma_ll_rx_start(hal->dma_in, &hal->dmadesc_rx->desc);
spi_dma_ll_rx_start(hal->dma_in, hal->rx_dma_chan, &hal->dmadesc_rx->desc);
}
void spi_slave_hd_hal_txdma(spi_slave_hd_hal_context_t *hal, uint8_t *data, size_t len)
@ -149,13 +159,13 @@ void spi_slave_hd_hal_txdma(spi_slave_hd_hal_context_t *hal, uint8_t *data, size
lldesc_setup_link(&hal->dmadesc_tx->desc, data, len, false);
spi_ll_dma_tx_fifo_reset(hal->dev);
spi_dma_ll_tx_reset(hal->dma_out);
spi_dma_ll_tx_reset(hal->dma_out, hal->tx_dma_chan);
spi_ll_slave_reset(hal->dev);
spi_ll_outfifo_empty_clr(hal->dev);
spi_ll_clear_intr(hal->dev, SPI_LL_INTR_CMD8);
spi_ll_dma_tx_enable(hal->dev, 1);
spi_dma_ll_tx_start(hal->dma_out, &hal->dmadesc_tx->desc);
spi_dma_ll_tx_start(hal->dma_out, hal->tx_dma_chan, &hal->dmadesc_tx->desc);
}
static spi_ll_intr_t get_event_intr(spi_slave_hd_hal_context_t *hal, spi_event_t ev)
@ -258,27 +268,27 @@ int spi_slave_hd_hal_rxdma_seg_get_len(spi_slave_hd_hal_context_t *hal)
bool spi_slave_hd_hal_get_tx_finished_trans(spi_slave_hd_hal_context_t *hal, void **out_trans)
{
if (!hal->tx_dma_head || hal->tx_dma_head->desc.owner) {
if ((uint32_t)&hal->tx_dma_head->desc == spi_dma_ll_get_out_eof_desc_addr(hal->dma_out, hal->tx_dma_chan)) {
return false;
}
hal->tx_dma_head = (spi_slave_hd_hal_desc_append_t *)STAILQ_NEXT(&hal->tx_dma_head->desc, qe);
*out_trans = hal->tx_dma_head->arg;
hal->tx_recycled_desc_cnt++;
hal->tx_dma_head = (spi_slave_hd_hal_desc_append_t *)STAILQ_NEXT(&hal->tx_dma_head->desc, qe);
return true;
}
bool spi_slave_hd_hal_get_rx_finished_trans(spi_slave_hd_hal_context_t *hal, void **out_trans, size_t *out_len)
{
if (!hal->rx_dma_head || hal->rx_dma_head->desc.owner) {
if ((uint32_t)&hal->rx_dma_head->desc == spi_dma_ll_get_in_suc_eof_desc_addr(hal->dma_in, hal->rx_dma_chan)) {
return false;
}
hal->rx_dma_head = (spi_slave_hd_hal_desc_append_t *)STAILQ_NEXT(&hal->rx_dma_head->desc, qe);
*out_trans = hal->rx_dma_head->arg;
*out_len = hal->rx_dma_head->desc.length;
hal->rx_recycled_desc_cnt++;
hal->rx_dma_head = (spi_slave_hd_hal_desc_append_t *)STAILQ_NEXT(&hal->rx_dma_head->desc, qe);
return true;
}
@ -328,23 +338,21 @@ esp_err_t spi_slave_hd_hal_txdma_append(spi_slave_hd_hal_context_t *hal, uint8_t
spi_slave_hd_hal_link_append_desc(hal->tx_cur_desc, data, len, false, arg);
if (!hal->tx_dma_head) {
//start a new link
hal->tx_dma_head = hal->tx_cur_desc;
if (!hal->tx_dma_started) {
hal->tx_dma_started = true;
//start a link
hal->tx_dma_tail = hal->tx_cur_desc;
spi_dma_ll_tx_reset(hal->dma_out);
spi_ll_outfifo_empty_clr(hal->dev);
spi_ll_clear_intr(hal->dev, SPI_LL_INTR_OUT_EOF);
spi_ll_dma_tx_fifo_reset(hal->dma_out);
spi_ll_outfifo_empty_clr(hal->dev);
spi_dma_ll_tx_reset(hal->dma_out, hal->tx_dma_chan);
spi_ll_dma_tx_enable(hal->dev, 1);
spi_dma_ll_tx_start(hal->dma_out, &hal->tx_dma_head->desc);
spi_dma_ll_tx_start(hal->dma_out, hal->tx_dma_chan, &hal->tx_cur_desc->desc);
} else {
//there is already a link
//there is already a consecutive link
STAILQ_NEXT(&hal->tx_dma_tail->desc, qe) = &hal->tx_cur_desc->desc;
hal->tx_dma_tail = hal->tx_cur_desc;
spi_dma_ll_tx_restart(hal->dma_out);
spi_dma_ll_tx_restart(hal->dma_out, hal->tx_dma_chan);
}
//Move the current descriptor pointer according to the number of the linked descriptors
@ -371,23 +379,21 @@ esp_err_t spi_slave_hd_hal_rxdma_append(spi_slave_hd_hal_context_t *hal, uint8_t
spi_slave_hd_hal_link_append_desc(hal->rx_cur_desc, data, len, false, arg);
if (!hal->rx_dma_head) {
//start a new link
hal->rx_dma_head = hal->rx_cur_desc;
if (!hal->rx_dma_started) {
hal->rx_dma_started = true;
//start a link
hal->rx_dma_tail = hal->rx_cur_desc;
spi_dma_ll_rx_reset(hal->dma_in);
spi_ll_infifo_full_clr(hal->dev);
spi_ll_clear_intr(hal->dev, SPI_LL_INTR_CMD7);
spi_dma_ll_rx_reset(hal->dma_in, hal->rx_dma_chan);
spi_ll_dma_rx_fifo_reset(hal->dma_in);
spi_ll_infifo_full_clr(hal->dev);
spi_ll_dma_rx_enable(hal->dev, 1);
spi_dma_ll_rx_start(hal->dma_in, &hal->rx_dma_head->desc);
spi_dma_ll_rx_start(hal->dma_in, hal->rx_dma_chan, &hal->rx_cur_desc->desc);
} else {
//there is already a link
//there is already a consecutive link
STAILQ_NEXT(&hal->rx_dma_tail->desc, qe) = &hal->rx_cur_desc->desc;
hal->rx_dma_tail = hal->rx_cur_desc;
spi_dma_ll_rx_restart(hal->dma_in);
spi_dma_ll_rx_restart(hal->dma_in, hal->rx_dma_chan);
}
//Move the current descriptor pointer according to the number of the linked descriptors
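The append-mode change above replaces the tx_dma_head/rx_dma_head null checks with explicit tx_dma_started/rx_dma_started flags: the first appended descriptor starts the DMA, and later descriptors are linked to the tail and only trigger a restart. A plain-C toy of that control flow, with all names invented for illustration:

#include <stdbool.h>
#include <stdio.h>

typedef struct toy_desc { struct toy_desc *next; } toy_desc_t;

typedef struct {
    bool started;        /* cf. tx_dma_started / rx_dma_started */
    toy_desc_t *tail;    /* cf. tx_dma_tail / rx_dma_tail       */
} toy_append_state_t;

static void toy_append(toy_append_state_t *st, toy_desc_t *d)
{
    d->next = NULL;
    if (!st->started) {
        st->started = true;
        st->tail = d;
        printf("start DMA at descriptor %p\n", (void *)d);       /* cf. spi_dma_ll_tx_start()   */
    } else {
        st->tail->next = d;                                      /* cf. STAILQ_NEXT(tail, qe)=d */
        st->tail = d;
        printf("append %p and restart DMA\n", (void *)d);        /* cf. spi_dma_ll_tx_restart() */
    }
}

int main(void)
{
    toy_append_state_t st = { 0 };
    toy_desc_t a, b, c;
    toy_append(&st, &a);
    toy_append(&st, &b);
    toy_append(&st, &c);
    return 0;
}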

Some files were not shown because too many files have changed in this diff.