author		Jason Gunthorpe <jgg@nvidia.com>	2025-05-14 09:48:55 -0300
committer	Joerg Roedel <jroedel@suse.de>	2025-05-16 14:29:16 +0200
commit		5e2ff240b31a37226260587e33707efc3c41e451
tree		e7b529a34d92754c2d64cde87d52512b2d8e8b6b
parent		e436576b0231542f6f233279f0972989232575a8
iommu: Clear the freelist after iommu_put_pages_list()
The commit below reworked iommu_put_pages_list() so that it no longer does list_del() on every entry. This was done expecting all the callers to already re-init the list, since a per-item deletion is not efficient.

It was missed that fq_ring_free_locked() re-uses its list after calling iommu_put_pages_list(), so the leftover list still points at freed struct pages and will crash or WARN/BUG/etc.

Reinit the list to empty in fq_ring_free_locked() after calling iommu_put_pages_list().

Audit of whether any other callers of iommu_put_pages_list() need the list to be empty:
 - iommu_dma_free_fq_single() and iommu_dma_free_fq_percpu() immediately free the memory
 - iommu_v1_map_pages(), v1_free_pgtable(), domain_exit() and riscv_iommu_map_pages() use a stack variable which goes out of scope
 - intel_iommu_tlb_sync() uses a gather in an iotlb_sync() callback; the caller re-inits the gather

Fixes: 13f43d7cf3e0 ("iommu/pages: Formalize the freelist API")
Reported-by: Borah, Chaitanya Kumar <chaitanya.kumar.borah@intel.com>
Closes: https://lore.kernel.org/r/SJ1PR11MB61292CE72D7BE06B8810021CB997A@SJ1PR11MB6129.namprd11.prod.outlook.com
Tested-by: Borah, Chaitanya Kumar <chaitanya.kumar.borah@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/0-v1-7d4dfa6140f7+11f04-iommu_freelist_init_jgg@nvidia.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
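For illustration, a minimal sketch of the hazard described above and of the fix. The types here are simplified stand-ins, not the kernel's definitions (the real ones live in drivers/iommu/iommu-pages.h and drivers/iommu/dma-iommu.c, and iova_fq_entry is reduced to the one relevant field):

/* Simplified stand-ins, NOT the kernel definitions. */
struct page;					/* opaque here */

struct iommu_pages_list {
	struct page *head;			/* stand-in for the real list head */
};

/* Stand-in initializer; name is unused in this simplified version. */
#define IOMMU_PAGES_LIST_INIT(name)	((struct iommu_pages_list){ .head = 0 })

/* Frees every page on the list; leaves the list head untouched. */
void iommu_put_pages_list(struct iommu_pages_list *list);

struct iova_fq_entry {				/* reduced to the relevant field */
	struct iommu_pages_list freelist;
};

void fq_entry_release(struct iova_fq_entry *entry)
{
	/* Frees the pages, but entry->freelist still points at them. */
	iommu_put_pages_list(&entry->freelist);

	/*
	 * The fix: reset the list to empty before the ring slot is recycled.
	 * Without this, the next pages queued on this entry are linked
	 * through already-freed struct pages, which crashes or trips
	 * WARN/BUG.
	 */
	entry->freelist = IOMMU_PAGES_LIST_INIT(entry->freelist);
}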
-rw-r--r--	drivers/iommu/dma-iommu.c	2
-rw-r--r--	drivers/iommu/iommu-pages.c	4
2 files changed, 5 insertions(+), 1 deletion(-)
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 0af1ab36283cba..7d2b51a890c75a 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -154,6 +154,8 @@ static void fq_ring_free_locked(struct iommu_dma_cookie *cookie, struct iova_fq
 		free_iova_fast(&cookie->iovad,
 			       fq->entries[idx].iova_pfn,
 			       fq->entries[idx].pages);
+		fq->entries[idx].freelist =
+			IOMMU_PAGES_LIST_INIT(fq->entries[idx].freelist);
 		fq->head = (fq->head + 1) & fq->mod_mask;
 	}
 }
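The assignment added above works because IOMMU_PAGES_LIST_INIT() evaluates to a compound literal whose embedded list head is (re)initialized to point at itself, i.e. an empty list. A reconstructed sketch of what the definitions in drivers/iommu/iommu-pages.h are assumed to look like (not copied from the tree, so treat the exact layout as an assumption):

#include <linux/list.h>

/* Assumed shape of the freelist type: a struct wrapping a plain list_head. */
struct iommu_pages_list {
	struct list_head pages;
};

/*
 * Assumed initializer: a compound literal whose list head points back at the
 * named list, i.e. an empty list. Assigning it over a stale head is what the
 * hunk above relies on.
 */
#define IOMMU_PAGES_LIST_INIT(name) \
	((struct iommu_pages_list){ .pages = LIST_HEAD_INIT(name.pages) })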
diff --git a/drivers/iommu/iommu-pages.c b/drivers/iommu/iommu-pages.c
index 4cc77fddfeeb47..238c09e5166b4d 100644
--- a/drivers/iommu/iommu-pages.c
+++ b/drivers/iommu/iommu-pages.c
@@ -105,7 +105,9 @@ EXPORT_SYMBOL_GPL(iommu_free_pages);
  * iommu_put_pages_list - free a list of pages.
  * @list: The list of pages to be freed
  *
- * Frees a list of pages allocated by iommu_alloc_pages_node_sz().
+ * Frees a list of pages allocated by iommu_alloc_pages_node_sz(). On return the
+ * passed list is invalid; the caller must use IOMMU_PAGES_LIST_INIT to reinit
+ * the list if it expects to use it again.
  */
 void iommu_put_pages_list(struct iommu_pages_list *list)
 {
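For contrast with the flush-queue case, a minimal sketch of the "stack variable" pattern the audit found in iommu_v1_map_pages() and friends, which is why those callers need no reinit. collect_old_table_pages() is a hypothetical stand-in for whatever unhooks pages during the operation:

#include "iommu-pages.h"	/* driver-private header providing the list type */

/* Hypothetical helper standing in for the code that unhooks old table pages. */
void collect_old_table_pages(struct iommu_pages_list *freelist);

void replace_tables_example(void)
{
	/* The freelist is a local; it dies with the function. */
	struct iommu_pages_list freelist = IOMMU_PAGES_LIST_INIT(freelist);

	collect_old_table_pages(&freelist);
	iommu_put_pages_list(&freelist);

	/*
	 * No reinit needed here: the now-invalid list goes out of scope and
	 * is never looked at again, unlike the long-lived per-entry lists in
	 * the flush queue.
	 */
}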