author    Warner Losh <imp@FreeBSD.org>  2020-12-04 21:34:04 +0000
committer Warner Losh <imp@FreeBSD.org>  2020-12-04 21:34:04 +0000
commit    730b1b4d1c7448a0bad3ad75f9ab5df2c64ffcf6 (patch)
tree      b98ed20f484e49c7903eed96be6b2a79b8f82943
parent    22bd0c9731d73167352019c0c49d454196d029dc (diff)
busdma: Annotate bus_dmamap_sync() with fence
Add an explicit thread fence release before returning from bus_dmamap_sync.
This should be a no-op in practice, but makes explicit that all ordinary
stores will be completed before subsequent reads/writes to ordinary device
memory. On x86, normal memory ordering is strong enough to generally
guarantee this. The fence keeps the optimizer (likely LTO) from reordering
other calls around this. The other architectures already have calls, as
appropriate, that are equivalent.

Note: On x86, there is one exception to this rule. If you've mapped memory
as write combining, then you will need to add a sfence or similar. Normally,
though, busdma doesn't operate on such memory, and drivers that do already
cope appropriately.

Reviewed by:            kib@, gallatin@, chuck@, mav@
Differential Revision:  https://reviews.freebsd.org/D27448
Notes:
    svn path=/head/; revision=368351
-rw-r--r--  sys/x86/x86/busdma_bounce.c  4
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/sys/x86/x86/busdma_bounce.c b/sys/x86/x86/busdma_bounce.c
index 338e985f6332..037416be0e8a 100644
--- a/sys/x86/x86/busdma_bounce.c
+++ b/sys/x86/x86/busdma_bounce.c
@@ -969,7 +969,7 @@ bounce_bus_dmamap_sync(bus_dma_tag_t dmat, bus_dmamap_t map,
bus_size_t datacount1, datacount2;
if (map == NULL || (bpage = STAILQ_FIRST(&map->bpages)) == NULL)
- return;
+ goto out;
/*
* Handle data bouncing. We might also want to add support for
@@ -1059,6 +1059,8 @@ next_r:
}
dmat->bounce_zone->total_bounced++;
}
+out:
+ atomic_thread_fence_rel();
}
static void