Driver Basics¶
Driver Entry and Exit points¶
-
module_init¶
module_init (x)
driver initialization entry point
Parameters
x - function to be run at kernel boot time or module insertion
Description
module_init() will either be called during do_initcalls() (if
builtin) or at module insertion time (if a module). There can only
be one per module.
-
module_exit¶
module_exit (x)
driver exit entry point
Parameters
x - function to be run when driver is removed
Description
module_exit() will wrap the driver clean-up code
with cleanup_module() when used with rmmod when
the driver is a module. If the driver is statically
compiled into the kernel, module_exit() has no effect.
There can only be one per module.
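As a minimal sketch (not taken from this document; the function and module names are made up), a module typically wires both macros to its own init and exit functions:

#include <linux/init.h>
#include <linux/module.h>

static int __init example_init(void)
{
        pr_info("example: loaded\n");
        return 0;               /* 0 on success, negative errno on failure */
}

static void __exit example_exit(void)
{
        pr_info("example: unloaded\n");
}

module_init(example_init);
module_exit(example_exit);
MODULE_LICENSE("GPL");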
-
struct klp_modinfo¶
ELF information preserved from the livepatch module
Definition:
struct klp_modinfo {
Elf_Ehdr hdr;
Elf_Shdr *sechdrs;
char *secstrings;
unsigned int symndx;
};
Members
hdr - ELF header
sechdrs - Section header table
secstrings - String table for the section headers
symndx - The symbol table section index
-
bool try_module_get(struct module *module)¶
take module refcount unless module is being removed
Parameters
struct module *module - the module we should check for
Description
Only try to get a module reference count if the module is not being removed. This call will fail if the module is in the process of being removed.
Care must also be taken to ensure the module exists and is alive prior to usage of this call. This can be guaranteed through two means:
Direct protection: you know an earlier caller must have increased the module reference through __module_get(). This can typically be achieved by having another entity other than the module itself increment the module reference count.
Implied protection: there is an implied protection against module removal. An example of this is the implied protection used by kernfs / sysfs. The sysfs store / read file operations are guaranteed to exist through the use of kernfs's active reference (see kernfs_active()), and a sysfs / kernfs file removal cannot happen while the same file is active. Therefore, if a sysfs file is being read or written to, the module which created it must still exist. It is therefore safe to use try_module_get() on module sysfs store / read ops.
One of the real values to try_module_get() is the module_is_live() check
which ensures that the caller of try_module_get() can yield to userspace
module removal requests and gracefully fail if the module is on its way out.
Returns true if the reference count was successfully incremented.
-
void module_put(struct module *module)¶
release a reference count to a module
Parameters
struct module *module - the module we should release a reference count for
Description
If you successfully bump a reference count to a module with try_module_get(),
when you are finished you must call module_put() to release that reference
count.
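A hedged sketch of the usual pairing; here owner stands for any struct module pointer (for example an ops->owner field) and is not defined in this document:

if (!try_module_get(owner))
        return -ENODEV;         /* module is already on its way out */

/* ... safely call into code or use data provided by that module ... */

module_put(owner);              /* drop the reference taken above */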
Driver device table¶
-
struct usb_device_id¶
identifies USB devices for probing and hotplugging
Definition:
struct usb_device_id {
__u16 match_flags;
__u16 idVendor;
__u16 idProduct;
__u16 bcdDevice_lo;
__u16 bcdDevice_hi;
__u8 bDeviceClass;
__u8 bDeviceSubClass;
__u8 bDeviceProtocol;
__u8 bInterfaceClass;
__u8 bInterfaceSubClass;
__u8 bInterfaceProtocol;
__u8 bInterfaceNumber;
kernel_ulong_t driver_info;
};
Members
match_flags - Bit mask controlling which of the other fields are used to match against new devices. Any field except for driver_info may be used, although some only make sense in conjunction with other fields. This is usually set by a USB_DEVICE_*() macro, which sets all other fields in this structure except for driver_info.
idVendor - USB vendor ID for a device; numbers are assigned by the USB forum to its members.
idProduct - Vendor-assigned product ID.
bcdDevice_lo - Low end of range of vendor-assigned product version numbers. This is also used to identify individual product versions, for a range consisting of a single device.
bcdDevice_hi - High end of version number range. The range of product versions is inclusive.
bDeviceClass - Class of device; numbers are assigned by the USB forum. Products may choose to implement classes, or be vendor-specific. Device classes specify behavior of all the interfaces on a device.
bDeviceSubClass - Subclass of device; associated with bDeviceClass.
bDeviceProtocol - Protocol of device; associated with bDeviceClass.
bInterfaceClass - Class of interface; numbers are assigned by the USB forum. Products may choose to implement classes, or be vendor-specific. Interface classes specify behavior only of a given interface; other interfaces may support other classes.
bInterfaceSubClass - Subclass of interface; associated with bInterfaceClass.
bInterfaceProtocol - Protocol of interface; associated with bInterfaceClass.
bInterfaceNumber - Number of interface; composite devices may use fixed interface numbers to differentiate between vendor-specific interfaces.
driver_info - Holds information used by the driver. Usually it holds a pointer to a descriptor understood by the driver, or perhaps device flags.
Description
In most cases, drivers will create a table of device IDs by using
USB_DEVICE(), or similar macros designed for that purpose.
They will then export it to userspace using MODULE_DEVICE_TABLE(),
and provide it to the USB core through their usb_driver structure.
See the usb_match_id() function for information about how matches are
performed. Briefly, you will normally use one of several macros to help
construct these entries. Each entry you provide will either identify
one or more specific products, or will identify a class of products
which have agreed to behave the same. You should put the more specific
matches towards the beginning of your table, so that driver_info can
record quirks of specific products.
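A typical table, sketched here with made-up vendor/product IDs, looks like the following; the driver then points its struct usb_driver .id_table at it:

static const struct usb_device_id example_id_table[] = {
        { USB_DEVICE(0x1234, 0x5678) },                 /* one specific product */
        { USB_INTERFACE_INFO(USB_CLASS_HID, 0, 0) },    /* or a whole interface class */
        { }                                             /* terminating all-zero entry */
};
MODULE_DEVICE_TABLE(usb, example_id_table);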
-
ACPI_DEVICE_CLASS¶
ACPI_DEVICE_CLASS (_cls, _msk)
macro used to describe an ACPI device with the PCI-defined class-code information
Parameters
_clsthe class, subclass, prog-if triple for this device
_mskthe class mask for this device
Description
This macro is used to create a struct acpi_device_id that matches a specific PCI class. The .id and .driver_data fields will be left initialized with the default value.
-
struct mdio_device_id¶
identifies PHY devices on an MDIO/MII bus
Definition:
struct mdio_device_id {
__u32 phy_id;
__u32 phy_id_mask;
};
Members
phy_id - The result of (mdio_read(MII_PHYSID1) << 16 | mdio_read(MII_PHYSID2)) & phy_id_mask for this PHY type
phy_id_mask - Defines the significant bits of phy_id. A value of 0 is used to terminate an array of struct mdio_device_id.
-
struct amba_id¶
identifies a device on an AMBA bus
Definition:
struct amba_id {
unsigned int id;
unsigned int mask;
void *data;
};
Members
id - The significant bits of the hardware device ID
mask - Bitmask specifying which bits of the id field are significant when matching. A driver binds to a device when ((hardware device ID) & mask) == id.
data - Private data used by the driver.
-
struct mips_cdmm_device_id¶
identifies devices in MIPS CDMM bus
Definition:
struct mips_cdmm_device_id {
__u8 type;
};
Members
typeDevice type identifier.
-
struct mei_cl_device_id¶
MEI client device identifier
Definition:
struct mei_cl_device_id {
char name[MEI_CL_NAME_SIZE];
uuid_le uuid;
__u8 version;
kernel_ulong_t driver_info;
};
Members
namehelper name
uuidclient uuid
versionclient protocol version
driver_infoinformation used by the driver.
Description
identifies mei client device by uuid and name
-
struct rio_device_id¶
RIO device identifier
Definition:
struct rio_device_id {
__u16 did, vid;
__u16 asm_did, asm_vid;
};
Members
didRapidIO device ID
vidRapidIO vendor ID
asm_didRapidIO assembly device ID
asm_vidRapidIO assembly vendor ID
Description
Identifies a RapidIO device based on both the device/vendor IDs and the assembly device/vendor IDs.
-
struct fsl_mc_device_id¶
MC object device identifier
Definition:
struct fsl_mc_device_id {
__u16 vendor;
const char obj_type[16];
};
Members
vendorvendor ID
obj_typeMC object type
Description
Type of entries in the “device Id” table for MC object devices supported by a MC object device driver. The last entry of the table has vendor set to 0x0
-
struct tb_service_id¶
Thunderbolt service identifiers
Definition:
struct tb_service_id {
__u32 match_flags;
char protocol_key[8 + 1];
__u32 protocol_id;
__u32 protocol_version;
__u32 protocol_revision;
kernel_ulong_t driver_data;
};
Members
match_flagsFlags used to match the structure
protocol_keyProtocol key the service supports
protocol_idProtocol id the service supports
protocol_versionVersion of the protocol
protocol_revisionRevision of the protocol software
driver_dataDriver specific data
Description
Thunderbolt XDomain services are exposed as devices where each device carries the protocol information the service supports. Thunderbolt XDomain service drivers match against that information.
-
struct typec_device_id¶
USB Type-C alternate mode identifiers
Definition:
struct typec_device_id {
__u16 svid;
__u8 mode;
kernel_ulong_t driver_data;
};
Members
svidStandard or Vendor ID
modeMode index
driver_dataDriver specific data
-
struct tee_client_device_id¶
tee based device identifier
Definition:
struct tee_client_device_id {
uuid_t uuid;
};
Members
uuidFor TEE based client devices we use the device uuid as the identifier.
-
struct wmi_device_id¶
WMI device identifier
Definition:
struct wmi_device_id {
const char guid_string[UUID_STRING_LEN+1];
const void *context;
};
Members
guid_string36 char string of the form fa50ff2b-f2e8-45de-83fa-65417f2f49ba
contextpointer to driver specific data
-
struct mhi_device_id¶
MHI device identification
Definition:
struct mhi_device_id {
const char chan[MHI_NAME_SIZE];
kernel_ulong_t driver_data;
};
Members
chanMHI channel name
driver_datadriver data;
-
struct dfl_device_id¶
dfl device identifier
Definition:
struct dfl_device_id {
__u16 type;
__u16 feature_id;
kernel_ulong_t driver_data;
};
Members
typeDFL FIU type of the device. See enum dfl_id_type.
feature_idfeature identifier local to its DFL FIU type.
driver_datadriver specific data.
-
struct ishtp_device_id¶
ISHTP device identifier
Definition:
struct ishtp_device_id {
guid_t guid;
kernel_ulong_t driver_data;
};
Members
guidGUID of the device.
driver_datapointer to driver specific data
-
struct cdx_device_id¶
CDX device identifier
Definition:
struct cdx_device_id {
__u16 vendor;
__u16 device;
__u16 subvendor;
__u16 subdevice;
__u32 class;
__u32 class_mask;
__u32 override_only;
};
Members
vendorVendor ID
deviceDevice ID
subvendorSubsystem vendor ID (or CDX_ANY_ID)
subdeviceSubsystem device ID (or CDX_ANY_ID)
class - Device class. Most drivers do not need to specify class/class_mask as vendor/device is normally sufficient.
class_maskLimit which sub-fields of the class field are compared.
override_onlyMatch only when dev->driver_override is this driver.
Description
Type of entries in the “device Id” table for CDX devices supported by a CDX device driver.
-
struct coreboot_device_id¶
Identifies a coreboot table entry
Definition:
struct coreboot_device_id {
__u32 tag;
kernel_ulong_t driver_data;
};
Members
tagtag ID
driver_datadriver specific data
Delaying and scheduling routines¶
-
struct prev_cputime¶
snapshot of system and user cputime
Definition:
struct prev_cputime {
#ifndef CONFIG_VIRT_CPU_ACCOUNTING_NATIVE
u64 utime;
u64 stime;
raw_spinlock_t lock;
#endif
};
Members
utimetime spent in user mode
stimetime spent in system mode
lockprotects the above two fields
Description
Stores previous user/system time values such that we can guarantee monotonicity.
-
int set_cpus_allowed_ptr(struct task_struct *p, const struct cpumask *new_mask)¶
set CPU affinity mask of a task
Parameters
struct task_struct *pthe task
const struct cpumask *new_maskCPU affinity mask
Return
zero if successful, or a negative error code
-
int task_nice(const struct task_struct *p)¶
return the nice value of a given task.
Parameters
const struct task_struct *pthe task in question.
Return
The nice value [ -20 ... 0 ... 19 ].
-
bool is_idle_task(const struct task_struct *p)¶
is the specified task an idle task?
Parameters
const struct task_struct *pthe task in question.
Return
1 if p is an idle task. 0 otherwise.
-
int wake_up_process(struct task_struct *p)¶
Wake up a specific process
Parameters
struct task_struct *pThe process to be woken up.
Description
Attempt to wake up the nominated process and move it to the set of runnable processes.
This function executes a full memory barrier before accessing the task state.
Return
1 if the process was woken up, 0 if it was already running.
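A common sleep/wake pairing looks roughly like the sketch below; worker_task and have_work are illustrative names, and the synchronization needed around have_work is omitted for brevity:

/* sleeping side */
set_current_state(TASK_INTERRUPTIBLE);
if (!have_work)
        schedule();
__set_current_state(TASK_RUNNING);

/* waking side: publish the work, then wake the sleeper */
have_work = true;
wake_up_process(worker_task);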
-
void preempt_notifier_register(struct preempt_notifier *notifier)¶
tell me when current is being preempted & rescheduled
Parameters
struct preempt_notifier *notifiernotifier struct to register
-
void preempt_notifier_unregister(struct preempt_notifier *notifier)¶
no longer interested in preemption notifications
Parameters
struct preempt_notifier *notifiernotifier struct to unregister
Description
This is not safe to call from within a preemption notifier.
-
__visible void notrace preempt_schedule_notrace(void)¶
preempt_schedule called by tracing
Parameters
voidno arguments
Description
The tracing infrastructure uses preempt_enable_notrace to prevent recursion and tracing preempt enabling caused by the tracing infrastructure itself. But as tracing can happen in areas coming from userspace or just about to enter userspace, a preempt enable can occur before user_exit() is called. This will cause the scheduler to be called when the system is still in usermode.
To prevent this, the preempt_enable_notrace will use this function instead of preempt_schedule() to exit user context if needed before calling the scheduler.
-
int cpupri_find_fitness(struct cpupri *cp, struct task_struct *p, struct cpumask *lowest_mask, bool (*fitness_fn)(struct task_struct *p, int cpu))¶
find the best (lowest-pri) CPU in the system
Parameters
struct cpupri *cpThe cpupri context
struct task_struct *pThe task
struct cpumask *lowest_maskA mask to fill in with selected CPUs (or NULL)
bool (*fitness_fn)(struct task_struct *p, int cpu)A pointer to a function to do custom checks whether the CPU fits a specific criteria so that we only return those CPUs.
Note
This function returns the recommended CPUs as calculated during the current invocation. By the time the call returns, the CPUs may have in fact changed priorities any number of times. While not ideal, it is not an issue of correctness since the normal rebalancer logic will correct any discrepancies created by racing against the uncertainty of the current priority configuration.
Return
(int)bool - CPUs were found
-
void cpupri_set(struct cpupri *cp, int cpu, int newpri)¶
update the CPU priority setting
Parameters
struct cpupri *cpThe cpupri context
int cpuThe target CPU
int newpriThe priority (INVALID,NORMAL,RT1-RT99,HIGHER) to assign to this CPU
Note
Assumes cpu_rq(cpu)->lock is locked
Return
(void)
-
int cpupri_init(struct cpupri *cp)¶
initialize the cpupri structure
Parameters
struct cpupri *cpThe cpupri context
Return
-ENOMEM on memory allocation failure.
-
void cpupri_cleanup(struct cpupri *cp)¶
clean up the cpupri structure
Parameters
struct cpupri *cpThe cpupri context
-
void update_tg_load_avg(struct cfs_rq *cfs_rq)¶
update the task group's load average
Parameters
struct cfs_rq *cfs_rq - the cfs_rq whose avg changed
Description
This function ‘ensures’: tg->load_avg := Sum tg->cfs_rq[]->avg.load. However, because tg->load_avg is a global value there are performance considerations.
In order to avoid having to look at the other cfs_rq’s, we use a differential update where we store the last value we propagated. This in turn allows skipping updates if the differential is ‘small’.
Updating tg’s load_avg is necessary before update_cfs_share().
-
int update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)¶
update the cfs_rq's load average
Parameters
u64 now - current time, as per cfs_rq_clock_pelt()
struct cfs_rq *cfs_rq - cfs_rq to update
Description
The cfs_rq avg is the direct sum of all its entities (blocked and runnable) avg. The immediate corollary is that all (fair) tasks must be attached.
cfs_rq->avg is used for task_h_load() and update_cfs_share() for example.
Since both these conditions indicate a changed cfs_rq->avg.load we should
call update_tg_load_avg() when this function returns true.
Return
true if the load decayed or we removed load.
-
void attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)¶
attach this entity to its cfs_rq load avg
Parameters
struct cfs_rq *cfs_rqcfs_rq to attach to
struct sched_entity *sesched_entity to attach
Description
Must call update_cfs_rq_load_avg() before this, since we rely on
cfs_rq->avg.last_update_time being current.
-
void detach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)¶
detach this entity from its cfs_rq load avg
Parameters
struct cfs_rq *cfs_rqcfs_rq to detach from
struct sched_entity *sesched_entity to detach
Description
Must call update_cfs_rq_load_avg() before this, since we rely on
cfs_rq->avg.last_update_time being current.
-
unsigned long cpu_util(int cpu, struct task_struct *p, int dst_cpu, int boost)¶
Estimates the amount of CPU capacity used by CFS tasks.
Parameters
int cputhe CPU to get the utilization for
struct task_struct *ptask for which the CPU utilization should be predicted or NULL
int dst_cpuCPU p migrates to, -1 if p moves from cpu or p == NULL
int boost1 to enable boosting, otherwise 0
Description
The unit of the return value must be the same as the one of CPU capacity so that CPU utilization can be compared with CPU capacity.
CPU utilization is the sum of running time of runnable tasks plus the recent utilization of currently non-runnable tasks on that CPU. It represents the amount of CPU capacity currently used by CFS tasks in the range [0..max CPU capacity] with max CPU capacity being the CPU capacity at f_max.
The estimated CPU utilization is defined as the maximum between CPU utilization and sum of the estimated utilization of the currently runnable tasks on that CPU. It preserves a utilization “snapshot” of previously-executed tasks, which helps better deduce how busy a CPU will be when a long-sleeping task wakes up. The contribution to CPU utilization of such a task would be significantly decayed at this point of time.
Boosted CPU utilization is defined as max(CPU runnable, CPU utilization).
CPU contention for CFS tasks can be detected by CPU runnable > CPU
utilization. Boosting is implemented in cpu_util() so that internal
users (e.g. EAS) can use it next to external users (e.g. schedutil),
latter via cpu_util_cfs_boost().
CPU utilization can be higher than the current CPU capacity (f_curr/f_max * max CPU capacity) or even the max CPU capacity because of rounding errors as well as task migrations or wakeups of new tasks. CPU utilization has to be capped to fit into the [0..max CPU capacity] range. Otherwise a group of CPUs (CPU0 util = 121% + CPU1 util = 80%) could be seen as over-utilized even though CPU1 has 20% of spare CPU capacity. CPU utilization is allowed to overshoot current CPU capacity though since this is useful for predicting the CPU capacity required after task migrations (scheduler-driven DVFS).
Return
(Boosted) (estimated) utilization for the specified CPU.
-
bool sched_use_asym_prio(struct sched_domain *sd, int cpu)¶
Check whether asym_packing priority must be used
Parameters
struct sched_domain *sdThe scheduling domain of the load balancing
int cpuA CPU
Description
Always use CPU priority when balancing load between SMT siblings. When balancing load between cores, it is not sufficient that cpu is idle. Only use CPU priority if the whole core is idle.
Return
True if the priority of cpu must be followed. False otherwise.
-
bool sched_group_asym(struct lb_env *env, struct sg_lb_stats *sgs, struct sched_group *group)¶
Check if the destination CPU can do asym_packing balance
Parameters
struct lb_env *envThe load balancing environment
struct sg_lb_stats *sgsLoad-balancing statistics of the candidate busiest group
struct sched_group *groupThe candidate busiest group
Description
env::dst_cpu can do asym_packing if it has higher priority than the preferred CPU of group.
Return
true if env::dst_cpu can do with asym_packing load balance. False otherwise.
-
void update_sg_lb_stats(struct lb_env *env, struct sd_lb_stats *sds, struct sched_group *group, struct sg_lb_stats *sgs, bool *sg_overloaded, bool *sg_overutilized)¶
Update sched_group’s statistics for load balancing.
Parameters
struct lb_env *envThe load balancing environment.
struct sd_lb_stats *sdsLoad-balancing data with statistics of the local group.
struct sched_group *groupsched_group whose statistics are to be updated.
struct sg_lb_stats *sgsvariable to hold the statistics for this group.
bool *sg_overloadedsched_group is overloaded
bool *sg_overutilizedsched_group is overutilized
-
bool update_sd_pick_busiest(struct lb_env *env, struct sd_lb_stats *sds, struct sched_group *sg, struct sg_lb_stats *sgs)¶
return 1 on busiest group
Parameters
struct lb_env *envThe load balancing environment.
struct sd_lb_stats *sdssched_domain statistics
struct sched_group *sgsched_group candidate to be checked for being the busiest
struct sg_lb_stats *sgssched_group statistics
Description
Determine if sg is a busier group than the previously selected busiest group.
Return
true if sg is a busier group than the previously selected
busiest group. false otherwise.
-
int idle_cpu_without(int cpu, struct task_struct *p)¶
would a given CPU be idle without p ?
Parameters
int cputhe processor on which idleness is tested.
struct task_struct *ptask which should be ignored.
Return
1 if the CPU would be idle. 0 otherwise.
-
void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sds)¶
Update sched_domain’s statistics for load balancing.
Parameters
struct lb_env *envThe load balancing environment.
struct sd_lb_stats *sdsvariable to hold the statistics for this sched_domain.
-
void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *sds)¶
Calculate the amount of imbalance present within the groups of a given sched_domain during load balance.
Parameters
struct lb_env *envload balance environment
struct sd_lb_stats *sdsstatistics of the sched_domain whose imbalance is to be calculated.
-
struct sched_group *sched_balance_find_src_group(struct lb_env *env)¶
Returns the busiest group within the sched_domain if there is an imbalance.
Parameters
struct lb_env *envThe load balancing environment.
Description
Also calculates the amount of runnable load which should be moved to restore balance.
Return
The busiest group if imbalance exists.
-
DECLARE_COMPLETION¶
DECLARE_COMPLETION (work)
declare and initialize a completion structure
Parameters
workidentifier for the completion structure
Description
This macro declares and initializes a completion structure. Generally used for static declarations. You should use the _ONSTACK variant for automatic variables.
-
DECLARE_COMPLETION_ONSTACK¶
DECLARE_COMPLETION_ONSTACK (work)
declare and initialize a completion structure
Parameters
workidentifier for the completion structure
Description
This macro declares and initializes a completion structure on the kernel stack.
-
void init_completion(struct completion *x)¶
Initialize a dynamically allocated completion
Parameters
struct completion *xpointer to completion structure that is to be initialized
Description
This inline function will initialize a dynamically created completion structure.
-
void reinit_completion(struct completion *x)¶
reinitialize a completion structure
Parameters
struct completion *xpointer to completion structure that is to be reinitialized
Description
This inline function should be used to reinitialize a completion structure so it can be reused. This is especially important after complete_all() is used.
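For instance, a probe path might block until an interrupt handler signals that hardware setup finished; hw_ready is an illustrative name:

static DECLARE_COMPLETION(hw_ready);    /* or init_completion() on a dynamically allocated one */

/* interrupt handler or other context, once setup is done */
complete(&hw_ready);

/* probe path: block until complete() has been called */
wait_for_completion(&hw_ready);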
Time and timer routines¶
-
u64 get_jiffies_64(void)¶
read the 64-bit non-atomic jiffies_64 value
Parameters
voidno arguments
Description
When BITS_PER_LONG < 64, this uses sequence number sampling using jiffies_lock to protect the 64-bit read.
Return
current 64-bit jiffies value
-
time_after¶
time_after (a, b)
returns true if the time a is after time b.
Parameters
afirst comparable as unsigned long
bsecond comparable as unsigned long
Description
Do this with “<0” and “>=0” to only test the sign of the result. A good compiler would generate better code (and a really good compiler wouldn’t care). Gcc is currently neither.
Return
true if time a is after time b, otherwise false.
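These macros handle jiffies wrap-around correctly, which is why open-coded comparisons should be avoided. A typical polling loop, with device_ready() standing in for any driver-specific check, might look like:

unsigned long timeout = jiffies + msecs_to_jiffies(500);

while (!device_ready(dev)) {            /* device_ready() is hypothetical */
        if (time_after(jiffies, timeout))
                return -ETIMEDOUT;
        cpu_relax();
}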
-
time_before¶
time_before (a, b)
returns true if the time a is before time b.
Parameters
afirst comparable as unsigned long
bsecond comparable as unsigned long
Return
true if time a is before time b, otherwise false.
-
time_after_eq¶
time_after_eq (a, b)
returns true if the time a is after or the same as time b.
Parameters
afirst comparable as unsigned long
bsecond comparable as unsigned long
Return
true if time a is after or the same as time b, otherwise false.
-
time_before_eq¶
time_before_eq (a, b)
returns true if the time a is before or the same as time b.
Parameters
afirst comparable as unsigned long
bsecond comparable as unsigned long
Return
true if time a is before or the same as time b, otherwise false.
-
time_in_range¶
time_in_range (a, b, c)
Calculate whether a is in the range of [b, c].
Parameters
atime to test
bbeginning of the range
cend of the range
Return
true if time a is in the range [b, c], otherwise false.
-
time_in_range_open¶
time_in_range_open (a, b, c)
Calculate whether a is in the range of [b, c).
Parameters
atime to test
bbeginning of the range
cend of the range
Return
true if time a is in the range [b, c), otherwise false.
-
time_after64¶
time_after64 (a, b)
returns true if the time a is after time b.
Parameters
afirst comparable as __u64
bsecond comparable as __u64
Description
This must be used when utilizing jiffies_64 (i.e. return value of
get_jiffies_64()).
Return
true if time a is after time b, otherwise false.
-
time_before64¶
time_before64 (a, b)
returns true if the time a is before time b.
Parameters
afirst comparable as __u64
bsecond comparable as __u64
Description
This must be used when utilizing jiffies_64 (i.e. return value of
get_jiffies_64()).
Return
true if time a is before time b, otherwise false.
-
time_after_eq64¶
time_after_eq64 (a, b)
returns true if the time a is after or the same as time b.
Parameters
afirst comparable as __u64
bsecond comparable as __u64
Description
This must be used when utilizing jiffies_64 (i.e. return value of
get_jiffies_64()).
Return
true if time a is after or the same as time b, otherwise false.
-
time_before_eq64¶
time_before_eq64 (a, b)
returns true if the time a is before or the same as time b.
Parameters
afirst comparable as __u64
bsecond comparable as __u64
Description
This must be used when utilizing jiffies_64 (i.e. return value of
get_jiffies_64()).
Return
true if time a is before or the same as time b, otherwise false.
-
time_in_range64¶
time_in_range64 (a, b, c)
Calculate whether a is in the range of [b, c].
Parameters
atime to test
bbeginning of the range
cend of the range
Return
true if time a is in the range [b, c], otherwise false.
-
time_is_before_jiffies¶
time_is_before_jiffies (a)
return true if a is before jiffies
Parameters
atime (unsigned long) to compare to jiffies
Return
true if time a is before jiffies, otherwise false.
-
time_is_before_jiffies64¶
time_is_before_jiffies64 (a)
return true if a is before jiffies_64
Parameters
atime (__u64) to compare to jiffies_64
Return
true if time a is before jiffies_64, otherwise false.
-
time_is_after_jiffies¶
time_is_after_jiffies (a)
return true if a is after jiffies
Parameters
atime (unsigned long) to compare to jiffies
Return
true if time a is after jiffies, otherwise false.
-
time_is_after_jiffies64¶
time_is_after_jiffies64 (a)
return true if a is after jiffies_64
Parameters
atime (__u64) to compare to jiffies_64
Return
true if time a is after jiffies_64, otherwise false.
-
time_is_before_eq_jiffies¶
time_is_before_eq_jiffies (a)
return true if a is before or equal to jiffies
Parameters
atime (unsigned long) to compare to jiffies
Return
true if time a is before or the same as jiffies, otherwise false.
-
time_is_before_eq_jiffies64¶
time_is_before_eq_jiffies64 (a)
return true if a is before or equal to jiffies_64
Parameters
atime (__u64) to compare to jiffies_64
Return
true if time a is before or the same as jiffies_64, otherwise false.
-
time_is_after_eq_jiffies¶
time_is_after_eq_jiffies (a)
return true if a is after or equal to jiffies
Parameters
atime (unsigned long) to compare to jiffies
Return
true if time a is after or the same as jiffies, otherwise false.
-
time_is_after_eq_jiffies64¶
time_is_after_eq_jiffies64 (a)
return true if a is after or equal to jiffies_64
Parameters
atime (__u64) to compare to jiffies_64
Return
true if time a is after or the same as jiffies_64, otherwise false.
-
u64 jiffies_to_nsecs(const unsigned long j)¶
Convert jiffies to nanoseconds
Parameters
const unsigned long jjiffies value
Return
nanoseconds value
-
unsigned long msecs_to_jiffies(const unsigned int m)¶
convert milliseconds to jiffies
Parameters
const unsigned int mtime in milliseconds
Description
conversion is done as follows:
negative values mean ‘infinite timeout’ (MAX_JIFFY_OFFSET)
‘too large’ values [that would result in larger than MAX_JIFFY_OFFSET values] mean ‘infinite timeout’ too.
all other values are converted to jiffies by either multiplying the input value by a factor or dividing it with a factor and handling any 32-bit overflows. for the details see _msecs_to_jiffies()
msecs_to_jiffies() checks for the passed in value being a constant
via __builtin_constant_p() allowing gcc to eliminate most of the
code. __msecs_to_jiffies() is called if the value passed does not
allow constant folding and the actual conversion must be done at
runtime.
The HZ range specific helpers _msecs_to_jiffies() are called both
directly here and from __msecs_to_jiffies() in the case where
constant folding is not possible.
Return
jiffies value
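The usual consumers are timeout arguments; for example (a sketch, reusing the illustrative hw_ready completion from the earlier example):

/* sleep for roughly 100 ms */
schedule_timeout_interruptible(msecs_to_jiffies(100));

/* or bound a wait on a completion to 2 seconds */
if (!wait_for_completion_timeout(&hw_ready, msecs_to_jiffies(2000)))
        return -ETIMEDOUT;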
-
secs_to_jiffies¶
secs_to_jiffies (_secs)
convert seconds to jiffies
Parameters
_secstime in seconds
Description
Conversion is done by simple multiplication with HZ
secs_to_jiffies() is defined as a macro rather than a static inline
function so it can be used in static initializers.
Return
jiffies value
-
unsigned long usecs_to_jiffies(const unsigned int u)¶
convert microseconds to jiffies
Parameters
const unsigned int utime in microseconds
Description
conversion is done as follows:
‘too large’ values [that would result in larger than MAX_JIFFY_OFFSET values] mean ‘infinite timeout’ too.
all other values are converted to jiffies by either multiplying the input value by a factor or dividing it with a factor and handling any 32-bit overflows as for msecs_to_jiffies.
usecs_to_jiffies() checks for the passed in value being a constant
via __builtin_constant_p() allowing gcc to eliminate most of the
code. __usecs_to_jiffies() is called if the value passed does not
allow constant folding and the actual conversion must be done at
runtime.
The HZ range specific helpers _usecs_to_jiffies() are called both
directly here and from __usecs_to_jiffies() in the case where
constant folding is not possible.
Return
jiffies value
-
unsigned int jiffies_to_msecs(const unsigned long j)¶
Convert jiffies to milliseconds
Parameters
const unsigned long jjiffies value
Description
Avoid unnecessary multiplications/divisions in the two most common HZ cases.
Return
milliseconds value
-
unsigned int jiffies_to_usecs(const unsigned long j)¶
Convert jiffies to microseconds
Parameters
const unsigned long jjiffies value
Return
microseconds value
-
time64_t mktime64(const unsigned int year0, const unsigned int mon0, const unsigned int day, const unsigned int hour, const unsigned int min, const unsigned int sec)¶
Converts date to seconds.
Parameters
const unsigned int year0year to convert
const unsigned int mon0month to convert
const unsigned int dayday to convert
const unsigned int hourhour to convert
const unsigned int minminute to convert
const unsigned int secsecond to convert
Description
Converts Gregorian date to seconds since 1970-01-01 00:00:00. Assumes input in normal date format, i.e. 1980-12-31 23:59:59 => year=1980, mon=12, day=31, hour=23, min=59, sec=59.
[For the Julian calendar (which was used in Russia before 1917, Britain & colonies before 1752, anywhere else before 1582, and is still in use by some communities) leave out the -year/100+year/400 terms, and add 10.]
This algorithm was first published by Gauss (I think).
A leap second can be indicated by calling this function with sec as 60 (allowable under ISO 8601). The leap second is treated the same as the following second since they don’t exist in UNIX time.
An encoding of midnight at the end of the day as 24:00:00 - ie. midnight tomorrow - (allowable under ISO 8601) is supported.
Return
seconds since the epoch time for the given input date
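For example, under the conventions above:

/* 2000-01-01 00:00:00 UTC, expressed as seconds since the epoch */
time64_t t = mktime64(2000, 1, 1, 0, 0, 0);     /* t == 946684800 */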
-
void set_normalized_timespec64(struct timespec64 *ts, time64_t sec, s64 nsec)¶
set timespec sec and nsec parts and normalize
Parameters
struct timespec64 *tspointer to timespec variable to be set
time64_t secseconds to set
s64 nsecnanoseconds to set
Description
Set seconds and nanoseconds field of a timespec variable and normalize to the timespec storage format
Note
The tv_nsec part is always in the range of 0 <= tv_nsec < NSEC_PER_SEC. For negative values only the tv_sec field is negative !
-
struct timespec64 ns_to_timespec64(s64 nsec)¶
Convert nanoseconds to timespec64
Parameters
s64 nsecthe nanoseconds value to be converted
Return
the timespec64 representation of the nsec parameter.
-
unsigned long __msecs_to_jiffies(const unsigned int m)¶
convert milliseconds to jiffies
Parameters
const unsigned int mtime in milliseconds
Description
conversion is done as follows:
negative values mean ‘infinite timeout’ (MAX_JIFFY_OFFSET)
‘too large’ values [that would result in larger than MAX_JIFFY_OFFSET values] mean ‘infinite timeout’ too.
all other values are converted to jiffies by either multiplying the input value by a factor or dividing it with a factor and handling any 32-bit overflows. for the details see _msecs_to_jiffies()
msecs_to_jiffies() checks for the passed in value being a constant
via __builtin_constant_p() allowing gcc to eliminate most of the
code, __msecs_to_jiffies() is called if the value passed does not
allow constant folding and the actual conversion must be done at
runtime.
The _msecs_to_jiffies helpers are the HZ dependent conversion
routines found in include/linux/jiffies.h
Return
jiffies value
-
unsigned long __usecs_to_jiffies(const unsigned int u)¶
convert microseconds to jiffies
Parameters
const unsigned int u - time in microseconds
Return
jiffies value
-
unsigned long timespec64_to_jiffies(const struct timespec64 *value)¶
convert a timespec64 value to jiffies
Parameters
const struct timespec64 *value - pointer to struct timespec64
Description
The TICK_NSEC - 1 rounds up the value to the next resolution. Note that a remainder subtract here would not do the right thing as the resolution values don’t fall on second boundaries. I.e. the line: nsec -= nsec % TICK_NSEC; is NOT a correct resolution rounding. Note that due to the small error in the multiplier here, this rounding is incorrect for sufficiently large values of tv_nsec, but well formed timespecs should have tv_nsec < NSEC_PER_SEC, so we’re OK.
Rather, we just shift the bits off the right.
The >> (NSEC_JIFFIE_SC - SEC_JIFFIE_SC) converts the scaled nsec value to a scaled second value.
Return
jiffies value
-
void jiffies_to_timespec64(const unsigned long jiffies, struct timespec64 *value)¶
convert jiffies value to struct timespec64
Parameters
const unsigned long jiffies - jiffies value
struct timespec64 *value - pointer to struct timespec64
-
clock_t jiffies_to_clock_t(unsigned long x)¶
Convert jiffies to clock_t
Parameters
unsigned long xjiffies value
Return
jiffies converted to clock_t (CLOCKS_PER_SEC)
-
unsigned long clock_t_to_jiffies(unsigned long x)¶
Convert clock_t to jiffies
Parameters
unsigned long xclock_t value
Return
clock_t value converted to jiffies
-
u64 jiffies_64_to_clock_t(u64 x)¶
Convert jiffies_64 to clock_t
Parameters
u64 xjiffies_64 value
Return
jiffies_64 value converted to 64-bit “clock_t” (CLOCKS_PER_SEC)
-
u64 jiffies64_to_nsecs(u64 j)¶
Convert jiffies64 to nanoseconds
Parameters
u64 jjiffies64 value
Return
nanoseconds value
-
u64 jiffies64_to_msecs(const u64 j)¶
Convert jiffies64 to milliseconds
Parameters
const u64 jjiffies64 value
Return
milliseconds value
-
u64 nsecs_to_jiffies64(u64 n)¶
Convert nsecs in u64 to jiffies64
Parameters
u64 nnsecs in u64
Description
Unlike {m,u}secs_to_jiffies, type of input is not unsigned int but u64. And this doesn’t return MAX_JIFFY_OFFSET since this function is designed for scheduler, not for use in device drivers to calculate timeout value.
note
NSEC_PER_SEC = 10^9 = (5^9 * 2^9) = (1953125 * 512). ULLONG_MAX ns = 18446744073.709551615 secs, i.e. about 584 years.
Return
nsecs converted to jiffies64 value
-
unsigned long nsecs_to_jiffies(u64 n)¶
Convert nsecs in u64 to jiffies
Parameters
u64 nnsecs in u64
Description
Unlike {m,u}secs_to_jiffies, type of input is not unsigned int but u64. And this doesn’t return MAX_JIFFY_OFFSET since this function is designed for scheduler, not for use in device drivers to calculate timeout value.
note
NSEC_PER_SEC = 10^9 = (5^9 * 2^9) = (1953125 * 512). ULLONG_MAX ns = 18446744073.709551615 secs, i.e. about 584 years.
Return
nsecs converted to jiffies value
-
int get_timespec64(struct timespec64 *ts, const struct __kernel_timespec __user *uts)¶
get user’s time value into kernel space
Parameters
struct timespec64 *ts - destination
const struct __kernel_timespec __user *uts - user's time value as struct __kernel_timespec
Description
Handles compat or 32-bit modes.
Return
0 on success or negative errno on error
-
int put_timespec64(const struct timespec64 *ts, struct __kernel_timespec __user *uts)¶
convert timespec64 value to __kernel_timespec format and copy the latter to userspace
Parameters
const struct timespec64 *ts - input
struct __kernel_timespec __user *uts - user's struct __kernel_timespec
Return
0 on success or negative errno on error
-
int get_old_timespec32(struct timespec64 *ts, const void __user *uts)¶
get user’s old-format time value into kernel space
Parameters
struct timespec64 *ts - destination
const void __user *uts - user's old-format time value (struct old_timespec32)
Description
Handles X86_X32_ABI compatibility conversion.
Return
0 on success or negative errno on error
-
int put_old_timespec32(const struct timespec64 *ts, void __user *uts)¶
convert timespec64 value to struct old_timespec32 and copy the latter to userspace
Parameters
const struct timespec64 *ts - input
void __user *uts - user's struct old_timespec32
Description
Handles X86_X32_ABI compatibility conversion.
Return
0 on success or negative errno on error
-
int get_itimerspec64(struct itimerspec64 *it, const struct __kernel_itimerspec __user *uit)¶
get user's struct __kernel_itimerspec into kernel space
Parameters
struct itimerspec64 *it - destination
const struct __kernel_itimerspec __user *uit - user's struct __kernel_itimerspec
Return
0 on success or negative errno on error
-
int put_itimerspec64(const struct itimerspec64 *it, struct __kernel_itimerspec __user *uit)¶
convert struct itimerspec64 to __kernel_itimerspec format and copy the latter to userspace
Parameters
const struct itimerspec64 *it - input
struct __kernel_itimerspec __user *uit - user's struct __kernel_itimerspec
Return
0 on success or negative errno on error
-
int get_old_itimerspec32(struct itimerspec64 *its, const struct old_itimerspec32 __user *uits)¶
get user's struct old_itimerspec32 into kernel space
Parameters
struct itimerspec64 *its - destination
const struct old_itimerspec32 __user *uits - user's struct old_itimerspec32
Return
0 on success or negative errno on error
-
int put_old_itimerspec32(const struct itimerspec64 *its, struct old_itimerspec32 __user *uits)¶
convert struct itimerspec64 to struct old_itimerspec32 and copy the latter to userspace
Parameters
const struct itimerspec64 *its - input
struct old_itimerspec32 __user *uits - user's struct old_itimerspec32
Return
0 on success or negative errno on error
-
unsigned long __round_jiffies(unsigned long j, int cpu)¶
function to round jiffies to a full second
Parameters
unsigned long jthe time in (absolute) jiffies that should be rounded
int cputhe processor number on which the timeout will happen
Description
__round_jiffies() rounds an absolute time in the future (in jiffies)
up or down to (approximately) full seconds. This is useful for timers
for which the exact time they fire does not matter too much, as long as
they fire approximately every X seconds.
By rounding these timers to whole seconds, all such timers will fire at the same time, rather than at various times spread out. The goal of this is to have the CPU wake up less, which saves power.
The exact rounding is skewed for each processor to avoid all processors firing at the exact same time, which could lead to lock contention or spurious cache line bouncing.
The return value is the rounded version of the j parameter.
-
unsigned long __round_jiffies_relative(unsigned long j, int cpu)¶
function to round jiffies to a full second
Parameters
unsigned long jthe time in (relative) jiffies that should be rounded
int cputhe processor number on which the timeout will happen
Description
__round_jiffies_relative() rounds a time delta in the future (in jiffies)
up or down to (approximately) full seconds. This is useful for timers
for which the exact time they fire does not matter too much, as long as
they fire approximately every X seconds.
By rounding these timers to whole seconds, all such timers will fire at the same time, rather than at various times spread out. The goal of this is to have the CPU wake up less, which saves power.
The exact rounding is skewed for each processor to avoid all processors firing at the exact same time, which could lead to lock contention or spurious cache line bouncing.
The return value is the rounded version of the j parameter.
-
unsigned long round_jiffies(unsigned long j)¶
function to round jiffies to a full second
Parameters
unsigned long jthe time in (absolute) jiffies that should be rounded
Description
round_jiffies() rounds an absolute time in the future (in jiffies)
up or down to (approximately) full seconds. This is useful for timers
for which the exact time they fire does not matter too much, as long as
they fire approximately every X seconds.
By rounding these timers to whole seconds, all such timers will fire at the same time, rather than at various times spread out. The goal of this is to have the CPU wake up less, which saves power.
The return value is the rounded version of the j parameter.
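For example, a housekeeping timer that only needs roughly five-second granularity can be armed as below so that it coalesces with other rounded timers (my_timer is an illustrative struct timer_list):

mod_timer(&my_timer, round_jiffies(jiffies + 5 * HZ));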
-
unsigned long round_jiffies_relative(unsigned long j)¶
function to round jiffies to a full second
Parameters
unsigned long jthe time in (relative) jiffies that should be rounded
Description
round_jiffies_relative() rounds a time delta in the future (in jiffies)
up or down to (approximately) full seconds. This is useful for timers
for which the exact time they fire does not matter too much, as long as
they fire approximately every X seconds.
By rounding these timers to whole seconds, all such timers will fire at the same time, rather than at various times spread out. The goal of this is to have the CPU wake up less, which saves power.
The return value is the rounded version of the j parameter.
-
unsigned long __round_jiffies_up(unsigned long j, int cpu)¶
function to round jiffies up to a full second
Parameters
unsigned long jthe time in (absolute) jiffies that should be rounded
int cputhe processor number on which the timeout will happen
Description
This is the same as __round_jiffies() except that it will never
round down. This is useful for timeouts for which the exact time
of firing does not matter too much, as long as they don’t fire too
early.
-
unsigned long __round_jiffies_up_relative(unsigned long j, int cpu)¶
function to round jiffies up to a full second
Parameters
unsigned long jthe time in (relative) jiffies that should be rounded
int cputhe processor number on which the timeout will happen
Description
This is the same as __round_jiffies_relative() except that it will never
round down. This is useful for timeouts for which the exact time
of firing does not matter too much, as long as they don’t fire too
early.
-
unsigned long round_jiffies_up(unsigned long j)¶
function to round jiffies up to a full second
Parameters
unsigned long jthe time in (absolute) jiffies that should be rounded
Description
This is the same as round_jiffies() except that it will never
round down. This is useful for timeouts for which the exact time
of firing does not matter too much, as long as they don’t fire too
early.
-
unsigned long round_jiffies_up_relative(unsigned long j)¶
function to round jiffies up to a full second
Parameters
unsigned long jthe time in (relative) jiffies that should be rounded
Description
This is the same as round_jiffies_relative() except that it will never
round down. This is useful for timeouts for which the exact time
of firing does not matter too much, as long as they don’t fire too
early.
-
void init_timer_key(struct timer_list *timer, void (*func)(struct timer_list*), unsigned int flags, const char *name, struct lock_class_key *key)¶
initialize a timer
Parameters
struct timer_list *timerthe timer to be initialized
void (*func)(struct timer_list *)timer callback function
unsigned int flagstimer flags
const char *namename of the timer
struct lock_class_key *keylockdep class key of the fake lock used for tracking timer sync lock dependencies
Description
init_timer_key() must be done to a timer prior to calling any of the
other timer functions.
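In practice drivers rarely call init_timer_key() directly; they use the timer_setup() wrapper, which expands to it. A hedged sketch of a self-rearming 100 ms timer (all names are made up):

static struct timer_list my_timer;

static void my_timer_fn(struct timer_list *t)
{
        /* runs in timer (softirq) context; re-arm for another 100 ms */
        mod_timer(t, jiffies + msecs_to_jiffies(100));
}

/* e.g. in probe/init code */
timer_setup(&my_timer, my_timer_fn, 0);
mod_timer(&my_timer, jiffies + msecs_to_jiffies(100));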
-
int mod_timer_pending(struct timer_list *timer, unsigned long expires)¶
Modify a pending timer’s timeout
Parameters
struct timer_list *timerThe pending timer to be modified
unsigned long expiresNew absolute timeout in jiffies
Description
mod_timer_pending() is the same for pending timers as mod_timer(), but
will not activate inactive timers.
If timer->function == NULL then the start operation is silently discarded.
Return
0 - The timer was inactive and not modified or was in shutdown state and the operation was discarded
1 - The timer was active and requeued to expire at expires
-
int mod_timer(struct timer_list *timer, unsigned long expires)¶
Modify a timer’s timeout
Parameters
struct timer_list *timerThe timer to be modified
unsigned long expiresNew absolute timeout in jiffies
Description
mod_timer(timer, expires) is equivalent to:
del_timer(timer); timer->expires = expires; add_timer(timer);
mod_timer() is more efficient than the above open coded sequence. In
case that the timer is inactive, the del_timer() part is a NOP. The
timer is in any case activated with the new expiry time expires.
Note that if there are multiple unserialized concurrent users of the
same timer, then mod_timer() is the only safe way to modify the timeout,
since add_timer() cannot modify an already running timer.
If timer->function == NULL then the start operation is silently discarded. In this case the return value is 0 and meaningless.
Return
0 - The timer was inactive and started or was in shutdown state and the operation was discarded
1 - The timer was active and requeued to expire at expires or the timer was active and not modified because expires did not change the effective expiry time
-
int timer_reduce(struct timer_list *timer, unsigned long expires)¶
Modify a timer’s timeout if it would reduce the timeout
Parameters
struct timer_list *timerThe timer to be modified
unsigned long expiresNew absolute timeout in jiffies
Description
timer_reduce() is very similar to mod_timer(), except that it will only
modify an enqueued timer if that would reduce the expiration time. If
timer is not enqueued it starts the timer.
If timer->function == NULL then the start operation is silently discarded.
Return
0 - The timer was inactive and started or was in shutdown state and the operation was discarded
1 - The timer was active and requeued to expire at expires or the timer was active and not modified because expires did not change the effective expiry time such that the timer would expire earlier than already scheduled
-
void add_timer(struct timer_list *timer)¶
Start a timer
Parameters
struct timer_list *timerThe timer to be started
Description
Start timer to expire at timer->expires in the future. timer->expires is the absolute expiry time measured in ‘jiffies’. When the timer expires timer->function(timer) will be invoked from soft interrupt context.
The timer->expires and timer->function fields must be set prior to calling this function.
If timer->function == NULL then the start operation is silently discarded.
If timer->expires is already in the past timer will be queued to expire at the next timer tick.
This can only operate on an inactive timer. Attempts to invoke this on an active timer are rejected with a warning.
-
void add_timer_local(struct timer_list *timer)¶
Start a timer on the local CPU
Parameters
struct timer_list *timerThe timer to be started
Description
Same as add_timer() except that the timer flag TIMER_PINNED is set.
See add_timer() for further details.
-
void add_timer_global(struct timer_list *timer)¶
Start a timer without TIMER_PINNED flag set
Parameters
struct timer_list *timerThe timer to be started
Description
Same as add_timer() except that the timer flag TIMER_PINNED is unset.
See add_timer() for further details.
-
void add_timer_on(struct timer_list *timer, int cpu)¶
Start a timer on a particular CPU
Parameters
struct timer_list *timerThe timer to be started
int cpuThe CPU to start it on
Description
Same as add_timer() except that it starts the timer on the given CPU and
the TIMER_PINNED flag is set. When timer shouldn’t be a pinned timer in
the next round, add_timer_global() should be used instead as it unsets
the TIMER_PINNED flag.
See add_timer() for further details.
-
int timer_delete(struct timer_list *timer)¶
Deactivate a timer
Parameters
struct timer_list *timerThe timer to be deactivated
Description
The function only deactivates a pending timer, but contrary to
timer_delete_sync() it does not take into account whether the timer’s
callback function is concurrently executed on a different CPU or not.
It neither prevents rearming of the timer. If timer can be rearmed
concurrently then the return value of this function is meaningless.
Return
0 - The timer was not pending
1 - The timer was pending and deactivated
-
int timer_shutdown(struct timer_list *timer)¶
Deactivate a timer and prevent rearming
Parameters
struct timer_list *timerThe timer to be deactivated
Description
The function does not wait for an eventually running timer callback on a different CPU but it prevents rearming of the timer. Any attempt to arm timer after this function returns will be silently ignored.
This function is useful for teardown code and should only be used when
timer_shutdown_sync() cannot be invoked due to locking or context constraints.
Return
0 - The timer was not pending
1 - The timer was pending
-
int try_to_del_timer_sync(struct timer_list *timer)¶
Try to deactivate a timer
Parameters
struct timer_list *timerTimer to deactivate
Description
This function tries to deactivate a timer. On success the timer is not queued and the timer callback function is not running on any CPU.
This function does not guarantee that the timer cannot be rearmed right after dropping the base lock. That needs to be prevented by the calling code if necessary.
Return
0 - The timer was not pending
1 - The timer was pending and deactivated
-1 - The timer callback function is running on a different CPU
-
int timer_delete_sync(struct timer_list *timer)¶
Deactivate a timer and wait for the handler to finish.
Parameters
struct timer_list *timerThe timer to be deactivated
Description
Synchronization rules: Callers must prevent restarting of the timer,
otherwise this function is meaningless. It must not be called from
interrupt contexts unless the timer is an irqsafe one. The caller must
not hold locks which would prevent completion of the timer’s callback
function. The timer’s handler must not call add_timer_on(). Upon exit
the timer is not queued and the handler is not running on any CPU.
For !irqsafe timers, the caller must not hold locks that are held in interrupt context. Even if the lock has nothing to do with the timer in question. Here’s why:
CPU0                                 CPU1
----                                 ----
                                     <SOFTIRQ>
                                       call_timer_fn();
                                         base->running_timer = mytimer;
spin_lock_irq(somelock);
                                     <IRQ>
                                       spin_lock(somelock);
timer_delete_sync(mytimer);
while (base->running_timer == mytimer);
Now timer_delete_sync() will never return and never release somelock.
The interrupt on the other CPU is waiting to grab somelock but it has
interrupted the softirq that CPU0 is waiting to finish.
This function cannot guarantee that the timer is not rearmed again by some concurrent or preempting code, right after it dropped the base lock. If there is the possibility of a concurrent rearm then the return value of the function is meaningless.
If such a guarantee is needed, e.g. for teardown situations then use
timer_shutdown_sync() instead.
Return
0 - The timer was not pending
1 - The timer was pending and deactivated
-
int timer_shutdown_sync(struct timer_list *timer)¶
Shutdown a timer and prevent rearming
Parameters
struct timer_list *timerThe timer to be shutdown
Description
When the function returns it is guaranteed that:
- timer is not queued
- The callback function of timer is not running
- timer cannot be enqueued again. Any attempt to rearm timer is silently ignored.
See timer_delete_sync() for synchronization rules.
This function is useful for final teardown of an infrastructure where the timer is subject to a circular dependency problem.
A common pattern for this is a timer and a workqueue where the timer can
schedule work and work can arm the timer. On shutdown the workqueue must
be destroyed and the timer must be prevented from rearming. Unless the
code has conditionals like ‘if (mything->in_shutdown)’ to prevent that
there is no way to get this correct with timer_delete_sync().
timer_shutdown_sync() is solving the problem. The correct ordering of
calls in this case is:
timer_shutdown_sync(mything->timer);
workqueue_destroy(mything->workqueue);
After this ‘mything’ can be safely freed.
This obviously implies that the timer is not required to be functional for the rest of the shutdown operation.
Return
0 - The timer was not pending
1 - The timer was pending
High-resolution timers¶
-
ktime_t ktime_set(const s64 secs, const unsigned long nsecs)¶
Set a ktime_t variable from a seconds/nanoseconds value
Parameters
const s64 secsseconds to set
const unsigned long nsecsnanoseconds to set
Return
The ktime_t representation of the value.
-
int ktime_compare(const ktime_t cmp1, const ktime_t cmp2)¶
Compares two ktime_t variables for less, greater or equal
Parameters
const ktime_t cmp1comparable1
const ktime_t cmp2comparable2
Return
cmp1 < cmp2: return <0
cmp1 == cmp2: return 0
cmp1 > cmp2: return >0
-
bool ktime_after(const ktime_t cmp1, const ktime_t cmp2)¶
Compare if a ktime_t value is bigger than another one.
Parameters
const ktime_t cmp1comparable1
const ktime_t cmp2comparable2
Return
true if cmp1 happened after cmp2.
-
bool ktime_before(const ktime_t cmp1, const ktime_t cmp2)¶
Compare if a ktime_t value is smaller than another one.
Parameters
const ktime_t cmp1comparable1
const ktime_t cmp2comparable2
Return
true if cmp1 happened before cmp2.
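These helpers are commonly used against ktime_get(); for instance (a sketch):

ktime_t deadline = ktime_add_ms(ktime_get(), 50);

/* ... do some work ... */

if (ktime_after(ktime_get(), deadline))
        pr_warn("overran the 50 ms budget\n");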
-
bool ktime_to_timespec64_cond(const ktime_t kt, struct timespec64 *ts)¶
convert a ktime_t variable to timespec64 format only if the variable contains data
Parameters
const ktime_t ktthe ktime_t variable to convert
struct timespec64 *tsthe timespec variable to store the result in
Return
true if there was a successful conversion, false if kt was 0.
-
struct hrtimer_sleeper¶
simple sleeper structure
Definition:
struct hrtimer_sleeper {
struct hrtimer timer;
struct task_struct *task;
};
Members
timerembedded timer structure
tasktask to wake up
Description
task is set to NULL, when the timer expires.
-
void hrtimer_start(struct hrtimer *timer, ktime_t tim, const enum hrtimer_mode mode)¶
(re)start an hrtimer
Parameters
struct hrtimer *timerthe timer to be added
ktime_t timexpiry time
const enum hrtimer_mode modetimer mode: absolute (HRTIMER_MODE_ABS) or relative (HRTIMER_MODE_REL), and pinned (HRTIMER_MODE_PINNED); softirq based mode is considered for debug purpose only!
-
ktime_t hrtimer_get_remaining(const struct hrtimer *timer)¶
get remaining time for the timer
Parameters
const struct hrtimer *timerthe timer to read
-
bool hrtimer_is_queued(struct hrtimer *timer)¶
check, whether the timer is on one of the queues
Parameters
struct hrtimer *timerTimer to check
Return
True if the timer is queued, false otherwise
Description
The function can be used lockless, but it gives only a current snapshot.
-
void hrtimer_update_function(struct hrtimer *timer, enum hrtimer_restart (*function)(struct hrtimer*))¶
Update the timer’s callback function
Parameters
struct hrtimer *timerTimer to update
enum hrtimer_restart (*function)(struct hrtimer *)New callback function
Description
Only safe to call if the timer is not enqueued. Can be called in the callback function if the timer is not enqueued at the same time (see the comments above HRTIMER_STATE_ENQUEUED).
-
u64 hrtimer_forward_now(struct hrtimer *timer, ktime_t interval)¶
forward the timer expiry so it expires after now
Parameters
struct hrtimer *timerhrtimer to forward
ktime_t intervalthe interval to forward
Description
It is a variant of hrtimer_forward(). The timer will expire after the current
time of the hrtimer clock base. See hrtimer_forward() for details.
-
u64 hrtimer_forward(struct hrtimer *timer, ktime_t now, ktime_t interval)¶
forward the timer expiry
Parameters
struct hrtimer *timerhrtimer to forward
ktime_t nowforward past this time
ktime_t intervalthe interval to forward
Description
Forward the timer expiry so it will expire in the future.
Note
This only updates the timer expiry value and does not requeue the timer.
There is also a variant of the function hrtimer_forward_now().
Context
Can be safely called from the callback function of timer. If called from other contexts timer must neither be enqueued nor running the callback and the caller needs to take care of serialization.
Return
The number of overruns is returned.
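A common use of hrtimer_forward_now() is a self-rearming periodic callback. A minimal sketch follows; the 100 ms period and the function name are only examples:

#include <linux/hrtimer.h>
#include <linux/ktime.h>

static enum hrtimer_restart periodic_tick(struct hrtimer *timer)
{
	/* Push the expiry forward by one period past the current time.
	 * The return value is the number of missed periods (overruns),
	 * ignored here. */
	hrtimer_forward_now(timer, ms_to_ktime(100));

	/* Returning HRTIMER_RESTART makes the core requeue the timer. */
	return HRTIMER_RESTART;
}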
-
void hrtimer_start_range_ns(struct hrtimer *timer, ktime_t tim, u64 delta_ns, const enum hrtimer_mode mode)¶
(re)start an hrtimer
Parameters
struct hrtimer *timerthe timer to be added
ktime_t timexpiry time
u64 delta_ns“slack” range for the timer
const enum hrtimer_mode modetimer mode: absolute (HRTIMER_MODE_ABS) or relative (HRTIMER_MODE_REL), and pinned (HRTIMER_MODE_PINNED); softirq based mode is considered for debug purpose only!
-
int hrtimer_try_to_cancel(struct hrtimer *timer)¶
try to deactivate a timer
Parameters
struct hrtimer *timerhrtimer to stop
Return
0 when the timer was not active
1 when the timer was active
-1 when the timer is currently executing the callback function and cannot be stopped
-
int hrtimer_cancel(struct hrtimer *timer)¶
cancel a timer and wait for the handler to finish.
Parameters
struct hrtimer *timerthe timer to be cancelled
Return
0 when the timer was not active
1 when the timer was active
-
ktime_t __hrtimer_get_remaining(const struct hrtimer *timer, bool adjust)¶
get remaining time for the timer
Parameters
const struct hrtimer *timerthe timer to read
bool adjustadjust relative timers when CONFIG_TIME_LOW_RES=y
-
void hrtimer_init(struct hrtimer *timer, clockid_t clock_id, enum hrtimer_mode mode)¶
initialize a timer to the given clock
Parameters
struct hrtimer *timerthe timer to be initialized
clockid_t clock_idthe clock to be used
enum hrtimer_mode modeThe modes which are relevant for initialization: HRTIMER_MODE_ABS, HRTIMER_MODE_REL, HRTIMER_MODE_ABS_SOFT, HRTIMER_MODE_REL_SOFT
The PINNED variants of the above can be handed in, but the PINNED bit is ignored as pinning happens when the hrtimer is started
-
void hrtimer_setup(struct hrtimer *timer, enum hrtimer_restart (*function)(struct hrtimer*), clockid_t clock_id, enum hrtimer_mode mode)¶
initialize a timer to the given clock
Parameters
struct hrtimer *timerthe timer to be initialized
enum hrtimer_restart (*function)(struct hrtimer *)the callback function
clockid_t clock_idthe clock to be used
enum hrtimer_mode modeThe modes which are relevant for initialization: HRTIMER_MODE_ABS, HRTIMER_MODE_REL, HRTIMER_MODE_ABS_SOFT, HRTIMER_MODE_REL_SOFT
The PINNED variants of the above can be handed in, but the PINNED bit is ignored as pinning happens when the hrtimer is started
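Putting hrtimer_setup() and hrtimer_start() together, a minimal one-shot sketch with hypothetical names (the 10 ms delay is illustrative):

#include <linux/hrtimer.h>
#include <linux/ktime.h>

static struct hrtimer demo_timer;

static enum hrtimer_restart demo_expires(struct hrtimer *t)
{
	/* One-shot: do the work here and do not restart the timer. */
	return HRTIMER_NORESTART;
}

static void demo_arm(void)
{
	/* Bind the callback and clock at setup time ... */
	hrtimer_setup(&demo_timer, demo_expires, CLOCK_MONOTONIC,
		      HRTIMER_MODE_REL);
	/* ... then start it 10 ms from now, relative to the current time. */
	hrtimer_start(&demo_timer, ms_to_ktime(10), HRTIMER_MODE_REL);
}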
-
void hrtimer_setup_on_stack(struct hrtimer *timer, enum hrtimer_restart (*function)(struct hrtimer*), clockid_t clock_id, enum hrtimer_mode mode)¶
initialize a timer on stack memory
Parameters
struct hrtimer *timerThe timer to be initialized
enum hrtimer_restart (*function)(struct hrtimer *)the callback function
clockid_t clock_idThe clock to be used
enum hrtimer_mode modeThe timer mode
Description
Similar to hrtimer_setup(), except that this one must be used if struct hrtimer is in stack
memory.
-
void hrtimer_sleeper_start_expires(struct hrtimer_sleeper *sl, enum hrtimer_mode mode)¶
Start a hrtimer sleeper timer
Parameters
struct hrtimer_sleeper *slsleeper to be started
enum hrtimer_mode modetimer mode abs/rel
Description
Wrapper around hrtimer_start_expires() for hrtimer_sleeper based timers to allow PREEMPT_RT to tweak the delivery mode (soft/hardirq context)
-
void hrtimer_setup_sleeper_on_stack(struct hrtimer_sleeper *sl, clockid_t clock_id, enum hrtimer_mode mode)¶
initialize a sleeper in stack memory
Parameters
struct hrtimer_sleeper *slsleeper to be initialized
clockid_t clock_idthe clock to be used
enum hrtimer_mode modetimer mode abs/rel
Wait queues and Wake events¶
-
int waitqueue_active(struct wait_queue_head *wq_head)¶
locklessly test for waiters on the queue
Parameters
struct wait_queue_head *wq_headthe waitqueue to test for waiters
Description
returns true if the wait list is not empty
Use either while holding wait_queue_head::lock or when used for wakeups with an extra smp_mb() like:
CPU0 - waker                          CPU1 - waiter

                                      for (;;) {
  @cond = true;                         prepare_to_wait(&wq_head, &wait, state);
  smp_mb();                             // smp_mb() from set_current_state()
  if (waitqueue_active(wq_head))        if (@cond)
    wake_up(wq_head);                     break;
                                        schedule();
                                      }
                                      finish_wait(&wq_head, &wait);
Because without the explicit smp_mb() it’s possible for the
waitqueue_active() load to get hoisted over the cond store such that we’ll
observe an empty wait list while the waiter might not observe cond.
Also note that this ‘optimization’ trades a spin_lock() for an smp_mb(), which (when the lock is uncontended) are of roughly equal cost.
NOTE
this function is lockless and requires care, incorrect usage _will_ lead to sporadic and non-obvious failure.
-
bool wq_has_single_sleeper(struct wait_queue_head *wq_head)¶
check if there is only one sleeper
Parameters
struct wait_queue_head *wq_headwait queue head
Description
Returns true if wq_head has only one sleeper on the list.
Please refer to the comment for waitqueue_active.
-
bool wq_has_sleeper(struct wait_queue_head *wq_head)¶
check if there are any waiting processes
Parameters
struct wait_queue_head *wq_headwait queue head
Description
Returns true if wq_head has waiting processes
Please refer to the comment for waitqueue_active.
-
void wake_up_pollfree(struct wait_queue_head *wq_head)¶
signal that a polled waitqueue is going away
Parameters
struct wait_queue_head *wq_headthe wait queue head
Description
In the very rare cases where a ->poll() implementation uses a waitqueue whose
lifetime is tied to a task rather than to the ‘struct file’ being polled,
this function must be called before the waitqueue is freed so that
non-blocking polls (e.g. epoll) are notified that the queue is going away.
The caller must also RCU-delay the freeing of the wait_queue_head, e.g. via
an explicit synchronize_rcu() or call_rcu(), or via SLAB_TYPESAFE_BY_RCU.
-
wait_event¶
wait_event (wq_head, condition)
sleep until a condition gets true
Parameters
wq_headthe waitqueue to wait on
conditiona C expression for the event to wait for
Description
The process is put to sleep (TASK_UNINTERRUPTIBLE) until the condition evaluates to true. The condition is checked each time the waitqueue wq_head is woken up.
wake_up() has to be called after changing any variable that could change the result of the wait condition.
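A minimal waiter/waker sketch of the rule above; the queue head, flag, and function names are hypothetical:

#include <linux/wait.h>

static DECLARE_WAIT_QUEUE_HEAD(demo_wq);
static bool demo_done;

static void demo_wait_for_completion(void)
{
	/* Sleeps in TASK_UNINTERRUPTIBLE until demo_done becomes true. */
	wait_event(demo_wq, demo_done);
}

static void demo_signal_completion(void)
{
	/* Update the condition first, then wake the waiters. */
	demo_done = true;
	wake_up(&demo_wq);
}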
-
wait_event_freezable¶
wait_event_freezable (wq_head, condition)
sleep (or freeze) until a condition gets true
Parameters
wq_headthe waitqueue to wait on
conditiona C expression for the event to wait for
Description
The process is put to sleep (TASK_INTERRUPTIBLE -- so as not to contribute to system load) until the condition evaluates to true. The condition is checked each time the waitqueue wq_head is woken up.
wake_up() has to be called after changing any variable that could change the result of the wait condition.
-
wait_event_timeout¶
wait_event_timeout (wq_head, condition, timeout)
sleep until a condition gets true or a timeout elapses
Parameters
wq_headthe waitqueue to wait on
conditiona C expression for the event to wait for
timeouttimeout, in jiffies
Description
The process is put to sleep (TASK_UNINTERRUPTIBLE) until the condition evaluates to true. The condition is checked each time the waitqueue wq_head is woken up.
wake_up() has to be called after changing any variable that could change the result of the wait condition.
Return
0 if the condition evaluated to false after the timeout elapsed,
1 if the condition evaluated to true after the timeout elapsed,
or the remaining jiffies (at least 1) if the condition evaluated
to true before the timeout elapsed.
-
wait_event_cmd¶
wait_event_cmd (wq_head, condition, cmd1, cmd2)
sleep until a condition gets true
Parameters
wq_headthe waitqueue to wait on
conditiona C expression for the event to wait for
cmd1the command will be executed before sleep
cmd2the command will be executed after sleep
Description
The process is put to sleep (TASK_UNINTERRUPTIBLE) until the condition evaluates to true. The condition is checked each time the waitqueue wq_head is woken up.
wake_up() has to be called after changing any variable that could change the result of the wait condition.
-
wait_event_interruptible¶
wait_event_interruptible (wq_head, condition)
sleep until a condition gets true
Parameters
wq_headthe waitqueue to wait on
conditiona C expression for the event to wait for
Description
The process is put to sleep (TASK_INTERRUPTIBLE) until the condition evaluates to true or a signal is received. The condition is checked each time the waitqueue wq_head is woken up.
wake_up() has to be called after changing any variable that could change the result of the wait condition.
The function will return -ERESTARTSYS if it was interrupted by a signal and 0 if condition evaluated to true.
-
wait_event_interruptible_timeout¶
wait_event_interruptible_timeout (wq_head, condition, timeout)
sleep until a condition gets true or a timeout elapses
Parameters
wq_headthe waitqueue to wait on
conditiona C expression for the event to wait for
timeouttimeout, in jiffies
Description
The process is put to sleep (TASK_INTERRUPTIBLE) until the condition evaluates to true or a signal is received. The condition is checked each time the waitqueue wq_head is woken up.
wake_up() has to be called after changing any variable that could change the result of the wait condition.
Return
0 if the condition evaluated to false after the timeout elapsed,
1 if the condition evaluated to true after the timeout elapsed,
the remaining jiffies (at least 1) if the condition evaluated
to true before the timeout elapsed, or -ERESTARTSYS if it was
interrupted by a signal.
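One way a caller can interpret those return values; a sketch with hypothetical names and an arbitrary 500 ms timeout:

#include <linux/wait.h>
#include <linux/jiffies.h>
#include <linux/errno.h>

static int demo_wait(struct wait_queue_head *wq, bool *ready)
{
	long ret;

	ret = wait_event_interruptible_timeout(*wq, *ready,
					       msecs_to_jiffies(500));
	if (ret == 0)
		return -ETIMEDOUT;	/* condition still false after the timeout */
	if (ret < 0)
		return ret;		/* -ERESTARTSYS: interrupted by a signal */
	return 0;			/* condition true; ret was the remaining jiffies (>= 1) */
}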
-
wait_event_hrtimeout¶
wait_event_hrtimeout (wq_head, condition, timeout)
sleep until a condition gets true or a timeout elapses
Parameters
wq_headthe waitqueue to wait on
conditiona C expression for the event to wait for
timeouttimeout, as a ktime_t
Description
The process is put to sleep (TASK_UNINTERRUPTIBLE) until the condition evaluates to true or a signal is received. The condition is checked each time the waitqueue wq_head is woken up.
wake_up() has to be called after changing any variable that could change the result of the wait condition.
The function returns 0 if condition became true, or -ETIME if the timeout elapsed.
-
wait_event_interruptible_hrtimeout¶
wait_event_interruptible_hrtimeout (wq, condition, timeout)
sleep until a condition gets true or a timeout elapses
Parameters
wqthe waitqueue to wait on
conditiona C expression for the event to wait for
timeouttimeout, as a ktime_t
Description
The process is put to sleep (TASK_INTERRUPTIBLE) until the condition evaluates to true or a signal is received. The condition is checked each time the waitqueue wq is woken up.
wake_up() has to be called after changing any variable that could change the result of the wait condition.
The function returns 0 if condition became true, -ERESTARTSYS if it was interrupted by a signal, or -ETIME if the timeout elapsed.
-
wait_event_idle¶
wait_event_idle (wq_head, condition)
wait for a condition without contributing to system load
Parameters
wq_headthe waitqueue to wait on
conditiona C expression for the event to wait for
Description
The process is put to sleep (TASK_IDLE) until the condition evaluates to true. The condition is checked each time the waitqueue wq_head is woken up.
wake_up() has to be called after changing any variable that could change the result of the wait condition.
-
wait_event_idle_exclusive¶
wait_event_idle_exclusive (wq_head, condition)
wait for a condition without contributing to system load
Parameters
wq_headthe waitqueue to wait on
conditiona C expression for the event to wait for
Description
The process is put to sleep (TASK_IDLE) until the condition evaluates to true. The condition is checked each time the waitqueue wq_head is woken up.
The process is put on the wait queue with the WQ_FLAG_EXCLUSIVE flag set, so if other processes wait on the same list, further processes are not considered when this process is woken.
wake_up() has to be called after changing any variable that could change the result of the wait condition.
-
wait_event_idle_timeout¶
wait_event_idle_timeout (wq_head, condition, timeout)
sleep without load until a condition becomes true or a timeout elapses
Parameters
wq_headthe waitqueue to wait on
conditiona C expression for the event to wait for
timeouttimeout, in jiffies
Description
The process is put to sleep (TASK_IDLE) until the condition evaluates to true. The condition is checked each time the waitqueue wq_head is woken up.
wake_up() has to be called after changing any variable that could change the result of the wait condition.
Return
0 if the condition evaluated to false after the timeout elapsed,
1 if the condition evaluated to true after the timeout elapsed,
or the remaining jiffies (at least 1) if the condition evaluated
to true before the timeout elapsed.
-
wait_event_idle_exclusive_timeout¶
wait_event_idle_exclusive_timeout (wq_head, condition, timeout)
sleep without load until a condition becomes true or a timeout elapses
Parameters
wq_headthe waitqueue to wait on
conditiona C expression for the event to wait for
timeouttimeout, in jiffies
Description
The process is put to sleep (TASK_IDLE) until the condition evaluates to true. The condition is checked each time the waitqueue wq_head is woken up.
The process is put on the wait queue with the WQ_FLAG_EXCLUSIVE flag set, so if other processes wait on the same list, further processes are not considered when this process is woken.
wake_up() has to be called after changing any variable that could change the result of the wait condition.
Return
0 if the condition evaluated to false after the timeout elapsed,
1 if the condition evaluated to true after the timeout elapsed,
or the remaining jiffies (at least 1) if the condition evaluated
to true before the timeout elapsed.
-
wait_event_interruptible_locked¶
wait_event_interruptible_locked (wq, condition)
sleep until a condition gets true
Parameters
wqthe waitqueue to wait on
conditiona C expression for the event to wait for
Description
The process is put to sleep (TASK_INTERRUPTIBLE) until the condition evaluates to true or a signal is received. The condition is checked each time the waitqueue wq is woken up.
It must be called with wq.lock being held. This spinlock is unlocked while sleeping but condition testing is done while lock is held and when this macro exits the lock is held.
The lock is locked/unlocked using spin_lock()/spin_unlock() functions which must match the way they are locked/unlocked outside of this macro.
wake_up_locked() has to be called after changing any variable that could change the result of the wait condition.
The function will return -ERESTARTSYS if it was interrupted by a signal and 0 if condition evaluated to true.
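A sketch of the locking convention described above, with wq.lock also protecting a hypothetical condition flag (all names are illustrative):

#include <linux/wait.h>
#include <linux/spinlock.h>

static DECLARE_WAIT_QUEUE_HEAD(demo_wq);
static bool demo_ready;	/* protected by demo_wq.lock */

static int demo_wait_locked(void)
{
	int err;

	spin_lock(&demo_wq.lock);
	/* The macro drops demo_wq.lock while sleeping and re-takes it
	 * before re-checking demo_ready and before returning. */
	err = wait_event_interruptible_locked(demo_wq, demo_ready);
	spin_unlock(&demo_wq.lock);
	return err;	/* 0 or -ERESTARTSYS */
}

static void demo_wake_locked(void)
{
	spin_lock(&demo_wq.lock);
	demo_ready = true;
	wake_up_locked(&demo_wq);
	spin_unlock(&demo_wq.lock);
}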
-
wait_event_interruptible_locked_irq¶
wait_event_interruptible_locked_irq (wq, condition)
sleep until a condition gets true
Parameters
wqthe waitqueue to wait on
conditiona C expression for the event to wait for
Description
The process is put to sleep (TASK_INTERRUPTIBLE) until the condition evaluates to true or a signal is received. The condition is checked each time the waitqueue wq is woken up.
It must be called with wq.lock being held. This spinlock is unlocked while sleeping but condition testing is done while lock is held and when this macro exits the lock is held.
The lock is locked/unlocked using spin_lock_irq()/spin_unlock_irq() functions which must match the way they are locked/unlocked outside of this macro.
wake_up_locked() has to be called after changing any variable that could change the result of the wait condition.
The function will return -ERESTARTSYS if it was interrupted by a signal and 0 if condition evaluated to true.
-
wait_event_interruptible_exclusive_locked¶
wait_event_interruptible_exclusive_locked (wq, condition)
sleep exclusively until a condition gets true
Parameters
wqthe waitqueue to wait on
conditiona C expression for the event to wait for
Description
The process is put to sleep (TASK_INTERRUPTIBLE) until the condition evaluates to true or a signal is received. The condition is checked each time the waitqueue wq is woken up.
It must be called with wq.lock being held. This spinlock is unlocked while sleeping but condition testing is done while lock is held and when this macro exits the lock is held.
The lock is locked/unlocked using spin_lock()/spin_unlock() functions which must match the way they are locked/unlocked outside of this macro.
The process is put on the wait queue with the WQ_FLAG_EXCLUSIVE flag set, so if other processes wait on the same list, further processes are not considered when this process is woken.
wake_up_locked() has to be called after changing any variable that could change the result of the wait condition.
The function will return -ERESTARTSYS if it was interrupted by a signal and 0 if condition evaluated to true.
-
wait_event_interruptible_exclusive_locked_irq¶
wait_event_interruptible_exclusive_locked_irq (wq, condition)
sleep until a condition gets true
Parameters
wqthe waitqueue to wait on
conditiona C expression for the event to wait for
Description
The process is put to sleep (TASK_INTERRUPTIBLE) until the condition evaluates to true or a signal is received. The condition is checked each time the waitqueue wq is woken up.
It must be called with wq.lock being held. This spinlock is unlocked while sleeping but condition testing is done while lock is held and when this macro exits the lock is held.
The lock is locked/unlocked using spin_lock_irq()/spin_unlock_irq() functions which must match the way they are locked/unlocked outside of this macro.
The process is put on the wait queue with the WQ_FLAG_EXCLUSIVE flag set, so if other processes wait on the same list, further processes are not considered when this process is woken.
wake_up_locked() has to be called after changing any variable that could change the result of the wait condition.
The function will return -ERESTARTSYS if it was interrupted by a signal and 0 if condition evaluated to true.
-
wait_event_killable¶
wait_event_killable (wq_head, condition)
sleep until a condition gets true
Parameters
wq_headthe waitqueue to wait on
conditiona C expression for the event to wait for
Description
The process is put to sleep (TASK_KILLABLE) until the condition evaluates to true or a signal is received. The condition is checked each time the waitqueue wq_head is woken up.
wake_up() has to be called after changing any variable that could change the result of the wait condition.
The function will return -ERESTARTSYS if it was interrupted by a signal and 0 if condition evaluated to true.
-
wait_event_state¶
wait_event_state (wq_head, condition, state)
sleep until a condition gets true
Parameters
wq_headthe waitqueue to wait on
conditiona C expression for the event to wait for
statestate to sleep in
Description
The process is put to sleep (state) until the condition evaluates to true or a signal is received (when allowed by state). The condition is checked each time the waitqueue wq_head is woken up.
wake_up() has to be called after changing any variable that could change the result of the wait condition.
The function will return -ERESTARTSYS if it was interrupted by a signal (when allowed by state) and 0 if condition evaluated to true.
-
wait_event_killable_timeout¶
wait_event_killable_timeout (wq_head, condition, timeout)
sleep until a condition gets true or a timeout elapses
Parameters
wq_headthe waitqueue to wait on
conditiona C expression for the event to wait for
timeouttimeout, in jiffies
Description
The process is put to sleep (TASK_KILLABLE) until the condition evaluates to true or a kill signal is received. The condition is checked each time the waitqueue wq_head is woken up.
wake_up() has to be called after changing any variable that could change the result of the wait condition.
Only kill signals interrupt this process.
Return
0 if the condition evaluated to false after the timeout elapsed,
1 if the condition evaluated to true after the timeout elapsed,
the remaining jiffies (at least 1) if the condition evaluated
to true before the timeout elapsed, or -ERESTARTSYS if it was
interrupted by a kill signal.
-
wait_event_lock_irq_cmd¶
wait_event_lock_irq_cmd (wq_head, condition, lock, cmd)
sleep until a condition gets true. The condition is checked under the lock. This is expected to be called with the lock taken.
Parameters
wq_headthe waitqueue to wait on
conditiona C expression for the event to wait for
locka locked spinlock_t, which will be released before cmd and schedule() and reacquired afterwards.
cmda command which is invoked outside the critical section before sleep
Description
The process is put to sleep (TASK_UNINTERRUPTIBLE) until the condition evaluates to true. The condition is checked each time the waitqueue wq_head is woken up.
wake_up() has to be called after changing any variable that could change the result of the wait condition.
This is supposed to be called while holding the lock. The lock is dropped before invoking the cmd and going to sleep and is reacquired afterwards.
-
wait_event_lock_irq¶
wait_event_lock_irq (wq_head, condition, lock)
sleep until a condition gets true. The condition is checked under the lock. This is expected to be called with the lock taken.
Parameters
wq_headthe waitqueue to wait on
conditiona C expression for the event to wait for
locka locked spinlock_t, which will be released before schedule() and reacquired afterwards.
Description
The process is put to sleep (TASK_UNINTERRUPTIBLE) until the condition evaluates to true. The condition is checked each time the waitqueue wq_head is woken up.
wake_up() has to be called after changing any variable that could change the result of the wait condition.
This is supposed to be called while holding the lock. The lock is dropped before going to sleep and is reacquired afterwards.
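A sketch of the same contract for wait_event_lock_irq(), with a hypothetical driver spinlock protecting the condition:

#include <linux/wait.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(demo_lock);
static DECLARE_WAIT_QUEUE_HEAD(demo_wq);
static unsigned int demo_pending;	/* protected by demo_lock */

static void demo_consume(void)
{
	spin_lock_irq(&demo_lock);
	/* demo_lock is dropped around schedule() and re-taken before the
	 * condition is re-evaluated and before the macro returns. */
	wait_event_lock_irq(demo_wq, demo_pending > 0, demo_lock);
	demo_pending--;
	spin_unlock_irq(&demo_lock);
}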
-
wait_event_interruptible_lock_irq_cmd¶
wait_event_interruptible_lock_irq_cmd (wq_head, condition, lock, cmd)
sleep until a condition gets true. The condition is checked under the lock. This is expected to be called with the lock taken.
Parameters
wq_headthe waitqueue to wait on
conditiona C expression for the event to wait for
locka locked spinlock_t, which will be released before cmd and schedule() and reacquired afterwards.
cmda command which is invoked outside the critical section before sleep
Description
The process is put to sleep (TASK_INTERRUPTIBLE) until the condition evaluates to true or a signal is received. The condition is checked each time the waitqueue wq_head is woken up.
wake_up() has to be called after changing any variable that could change the result of the wait condition.
This is supposed to be called while holding the lock. The lock is dropped before invoking the cmd and going to sleep and is reacquired afterwards.
The macro will return -ERESTARTSYS if it was interrupted by a signal and 0 if condition evaluated to true.
-
wait_event_interruptible_lock_irq¶
wait_event_interruptible_lock_irq (wq_head, condition, lock)
sleep until a condition gets true. The condition is checked under the lock. This is expected to be called with the lock taken.
Parameters
wq_headthe waitqueue to wait on
conditiona C expression for the event to wait for
locka locked spinlock_t, which will be released before schedule() and reacquired afterwards.
Description
The process is put to sleep (TASK_INTERRUPTIBLE) until the condition evaluates to true or a signal is received. The condition is checked each time the waitqueue wq_head is woken up.
wake_up() has to be called after changing any variable that could change the result of the wait condition.
This is supposed to be called while holding the lock. The lock is dropped before going to sleep and is reacquired afterwards.
The macro will return -ERESTARTSYS if it was interrupted by a signal and 0 if condition evaluated to true.
-
wait_event_interruptible_lock_irq_timeout¶
wait_event_interruptible_lock_irq_timeout (wq_head, condition, lock, timeout)
sleep until a condition gets true or a timeout elapses. The condition is checked under the lock. This is expected to be called with the lock taken.
Parameters
wq_headthe waitqueue to wait on
conditiona C expression for the event to wait for
locka locked spinlock_t, which will be released before schedule() and reacquired afterwards.
timeouttimeout, in jiffies
Description
The process is put to sleep (TASK_INTERRUPTIBLE) until the condition evaluates to true or a signal is received. The condition is checked each time the waitqueue wq_head is woken up.
wake_up() has to be called after changing any variable that could change the result of the wait condition.
This is supposed to be called while holding the lock. The lock is dropped before going to sleep and is reacquired afterwards.
The function returns 0 if the timeout elapsed, -ERESTARTSYS if it was interrupted by a signal, and the remaining jiffies otherwise if the condition evaluated to true before the timeout elapsed.
-
int __wake_up(struct wait_queue_head *wq_head, unsigned int mode, int nr_exclusive, void *key)¶
wake up threads blocked on a waitqueue.
Parameters
struct wait_queue_head *wq_headthe waitqueue
unsigned int modewhich threads
int nr_exclusivehow many wake-one or wake-many threads to wake up
void *keyis directly passed to the wakeup function
Description
If this function wakes up a task, it executes a full memory barrier before accessing the task state. Returns the number of exclusive tasks that were woken.
-
void __wake_up_sync_key(struct wait_queue_head *wq_head, unsigned int mode, void *key)¶
wake up threads blocked on a waitqueue.
Parameters
struct wait_queue_head *wq_headthe waitqueue
unsigned int modewhich threads
void *keyopaque value to be passed to wakeup targets
Description
The sync wakeup differs in that the waker knows that it will schedule away soon, so while the target thread will be woken up, it will not be migrated to another CPU - ie. the two threads are ‘synchronized’ with each other. This can prevent needless bouncing between CPUs.
On UP it can prevent extra preemption.
If this function wakes up a task, it executes a full memory barrier before accessing the task state.
-
void __wake_up_locked_sync_key(struct wait_queue_head *wq_head, unsigned int mode, void *key)¶
wake up a thread blocked on a locked waitqueue.
Parameters
struct wait_queue_head *wq_headthe waitqueue
unsigned int modewhich threads
void *keyopaque value to be passed to wakeup targets
Description
The sync wakeup differs in that the waker knows that it will schedule away soon, so while the target thread will be woken up, it will not be migrated to another CPU - ie. the two threads are ‘synchronized’ with each other. This can prevent needless bouncing between CPUs.
On UP it can prevent extra preemption.
If this function wakes up a task, it executes a full memory barrier before accessing the task state.
-
void finish_wait(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_entry)¶
clean up after waiting in a queue
Parameters
struct wait_queue_head *wq_headwaitqueue waited on
struct wait_queue_entry *wq_entrywait descriptor
Description
Sets current thread back to running state and removes the wait descriptor from the given waitqueue if still queued.
Internal Functions¶
-
int wait_task_stopped(struct wait_opts *wo, int ptrace, struct task_struct *p)¶
Wait for TASK_STOPPED or TASK_TRACED
Parameters
struct wait_opts *wowait options
int ptraceis the wait for ptrace
struct task_struct *ptask to wait for
Description
Handle sys_wait4() work for p in state TASK_STOPPED or TASK_TRACED.
Context
read_lock(tasklist_lock), which is released if return value is
non-zero. Also, grabs and releases p->sighand->siglock.
Return
0 if wait condition didn’t exist and search for other wait conditions should continue. Non-zero return, -errno on failure and p’s pid on success, implies that tasklist_lock is released and wait condition search should terminate.
-
bool task_set_jobctl_pending(struct task_struct *task, unsigned long mask)¶
set jobctl pending bits
Parameters
struct task_struct *tasktarget task
unsigned long maskpending bits to set
Description
Set mask in task->jobctl. mask must be subset of
JOBCTL_PENDING_MASK | JOBCTL_STOP_CONSUME | JOBCTL_STOP_SIGMASK |
JOBCTL_TRAPPING. If stop signo is being set, the existing signo is
cleared. If task is already being killed or exiting, this function
becomes noop.
Context
Must be called with task->sighand->siglock held.
Return
true if mask is set, false if made noop because task was dying.
-
void task_clear_jobctl_trapping(struct task_struct *task)¶
clear jobctl trapping bit
Parameters
struct task_struct *tasktarget task
Description
If JOBCTL_TRAPPING is set, a ptracer is waiting for us to enter TRACED. Clear it and wake up the ptracer. Note that we don’t need any further locking. task->siglock guarantees that task->parent points to the ptracer.
Context
Must be called with task->sighand->siglock held.
-
void task_clear_jobctl_pending(struct task_struct *task, unsigned long mask)¶
clear jobctl pending bits
Parameters
struct task_struct *tasktarget task
unsigned long maskpending bits to clear
Description
Clear mask from task->jobctl. mask must be subset of
JOBCTL_PENDING_MASK. If JOBCTL_STOP_PENDING is being cleared, other
STOP bits are cleared together.
If clearing of mask leaves no stop or trap pending, this function calls
task_clear_jobctl_trapping().
Context
Must be called with task->sighand->siglock held.
-
bool task_participate_group_stop(struct task_struct *task)¶
participate in a group stop
Parameters
struct task_struct *tasktask participating in a group stop
Description
task has JOBCTL_STOP_PENDING set and is participating in a group stop.
Group stop states are cleared and the group stop count is consumed if
JOBCTL_STOP_CONSUME was set. If the consumption completes the group
stop, the appropriate SIGNAL_* flags are set.
Context
Must be called with task->sighand->siglock held.
Return
true if group stop completion should be notified to the parent, false
otherwise.
-
void ptrace_trap_notify(struct task_struct *t)¶
schedule trap to notify ptracer
Parameters
struct task_struct *ttracee wanting to notify tracer
Description
This function schedules sticky ptrace trap which is cleared on the next TRAP_STOP to notify ptracer of an event. t must have been seized by ptracer.
If t is running, STOP trap will be taken. If trapped for STOP and ptracer is listening for events, tracee is woken up so that it can re-trap for the new event. If trapped otherwise, STOP trap will be eventually taken without returning to userland after the existing traps are finished by PTRACE_CONT.
Context
Must be called with task->sighand->siglock held.
-
int force_sig_seccomp(int syscall, int reason, bool force_coredump)¶
signals the task to allow in-process syscall emulation
Parameters
int syscallsyscall number to send to userland
int reasonfilter-supplied reason code to send to userland (via si_errno)
bool force_coredumptrue to trigger a coredump
Description
Forces a SIGSYS with a code of SYS_SECCOMP and related sigsys info.
-
void do_notify_parent_cldstop(struct task_struct *tsk, bool for_ptracer, int why)¶
notify parent of stopped/continued state change
Parameters
struct task_struct *tsktask reporting the state change
bool for_ptracerthe notification is for ptracer
int whyCLD_{CONTINUED|STOPPED|TRAPPED} to report
Description
Notify tsk’s parent that the stopped/continued state has changed. If
for_ptracer is false, tsk’s group leader notifies its real parent.
If true, tsk reports to tsk->parent which should be the ptracer.
Context
Must be called with tasklist_lock at least read locked.
-
bool do_signal_stop(int signr)¶
handle group stop for SIGSTOP and other stop signals
Parameters
int signrsignr causing group stop if initiating
Description
If JOBCTL_STOP_PENDING is not set yet, initiate group stop with signr
and participate in it. If already set, participate in the existing
group stop. If participated in a group stop (and thus slept), true is
returned with siglock released.
If ptraced, this function doesn’t handle stop itself. Instead,
JOBCTL_TRAP_STOP is scheduled and false is returned with siglock
untouched. The caller must ensure that INTERRUPT trap handling takes
places afterwards.
Context
Must be called with current->sighand->siglock held, which is released
on true return.
Return
false if group stop is already cancelled or ptrace trap is scheduled.
true if participated in group stop.
-
void do_jobctl_trap(void)¶
take care of ptrace jobctl traps
Parameters
voidno arguments
Description
When PT_SEIZED, it’s used for both group stop and explicit
SEIZE/INTERRUPT traps. Both generate PTRACE_EVENT_STOP trap with
accompanying siginfo. If stopped, lower eight bits of exit_code contain
the stop signal; otherwise, SIGTRAP.
When !PT_SEIZED, it’s used only for group stop trap with stop signal number as exit_code and no siginfo.
Context
Must be called with current->sighand->siglock held, which may be released and re-acquired before returning with intervening sleep.
-
void do_freezer_trap(void)¶
handle the freezer jobctl trap
Parameters
voidno arguments
Description
Puts the task into the frozen state, unless the task is about to quit; in that case it drops JOBCTL_TRAP_FREEZE instead.
Context
Must be called with current->sighand->siglock held, which is always released before returning.
-
void signal_delivered(struct ksignal *ksig, int stepping)¶
called after signal delivery to update blocked signals
Parameters
struct ksignal *ksigkernel signal struct
int steppingnonzero if debugger single-step or block-step in use
Description
This function should be called when a signal has successfully been
delivered. It updates the blocked signals accordingly (ksig->ka.sa.sa_mask
is always blocked), and the signal itself is blocked unless SA_NODEFER
is set in ksig->ka.sa.sa_flags. Tracing is notified.
-
long sys_restart_syscall(void)¶
restart a system call
Parameters
voidno arguments
-
void set_current_blocked(sigset_t *newset)¶
change current->blocked mask
Parameters
sigset_t *newsetnew mask
Description
It is wrong to change ->blocked directly; this helper should be used to ensure the process can’t miss a shared signal we are going to block.
-
long sys_rt_sigprocmask(int how, sigset_t __user *nset, sigset_t __user *oset, size_t sigsetsize)¶
change the list of currently blocked signals
Parameters
int howwhether to add, remove, or set signals
sigset_t __user * nsetnew signal mask value if non-null
sigset_t __user * osetprevious value of signal mask if non-null
size_t sigsetsizesize of sigset_t type
-
long sys_rt_sigpending(sigset_t __user *uset, size_t sigsetsize)¶
examine a pending signal that has been raised while blocked
Parameters
sigset_t __user * usetstores pending signals
size_t sigsetsizesize of sigset_t type or larger
-
void copy_siginfo_to_external32(struct compat_siginfo *to, const struct kernel_siginfo *from)¶
copy a kernel siginfo into a compat user siginfo
Parameters
struct compat_siginfo *tocompat siginfo destination
const struct kernel_siginfo *fromkernel siginfo source
Note
This function does not work properly for the SIGCHLD on x32, but fortunately it doesn’t have to. The only valid callers for this function are copy_siginfo_to_user32, which is overridden for x32, and the coredump code. The latter does not care because SIGCHLD will never cause a coredump.
-
int do_sigtimedwait(const sigset_t *which, kernel_siginfo_t *info, const struct timespec64 *ts)¶
wait for queued signals specified in which
Parameters
const sigset_t *whichqueued signals to wait for
kernel_siginfo_t *infoif non-null, the signal’s siginfo is returned here
const struct timespec64 *tsupper bound on process time suspension
-
long sys_rt_sigtimedwait(const sigset_t __user *uthese, siginfo_t __user *uinfo, const struct __kernel_timespec __user *uts, size_t sigsetsize)¶
synchronously wait for queued signals specified in uthese
Parameters
const sigset_t __user * uthesequeued signals to wait for
siginfo_t __user * uinfoif non-null, the signal’s siginfo is returned here
const struct __kernel_timespec __user * utsupper bound on process time suspension
size_t sigsetsizesize of sigset_t type
-
long sys_kill(pid_t pid, int sig)¶
send a signal to a process
Parameters
pid_t pidthe PID of the process
int sigsignal to be sent
-
long sys_pidfd_send_signal(int pidfd, int sig, siginfo_t __user *info, unsigned int flags)¶
Signal a process through a pidfd
Parameters
int pidfdfile descriptor of the process
int sigsignal to send
siginfo_t __user * infosignal info
unsigned int flagsfuture flags
Description
Send the signal to the thread group or to the individual thread depending on PIDFD_THREAD. In the future, an extension to flags may be used to override the default scope of pidfd.
Return
0 on success, negative errno on failure
-
long sys_tgkill(pid_t tgid, pid_t pid, int sig)¶
send signal to one specific thread
Parameters
pid_t tgidthe thread group ID of the thread
pid_t pidthe PID of the thread
int sigsignal to be sent
This syscall also checks the tgid and returns -ESRCH even if the PID exists but no longer belongs to the target process. This method solves the problem of threads exiting and PIDs getting reused.
-
long sys_tkill(pid_t pid, int sig)¶
send signal to one specific task
Parameters
pid_t pidthe PID of the task
int sigsignal to be sent
Send a signal to only one task, even if it’s a CLONE_THREAD task.
-
long sys_rt_sigqueueinfo(pid_t pid, int sig, siginfo_t __user *uinfo)¶
send a signal along with accompanying signal information
Parameters
pid_t pidthe PID of the thread
int sigsignal to be sent
siginfo_t __user * uinfosignal info to be sent
-
long sys_sigpending(old_sigset_t __user *uset)¶
examine pending signals
Parameters
old_sigset_t __user * usetwhere mask of pending signal is returned
-
long sys_sigprocmask(int how, old_sigset_t __user *nset, old_sigset_t __user *oset)¶
examine and change blocked signals
Parameters
int howwhether to add, remove, or set signals
old_sigset_t __user * nsetsignals to add or remove (if non-null)
old_sigset_t __user * osetprevious value of signal mask if non-null
Description
Some platforms have their own version with special arguments; others support only sys_rt_sigprocmask.
-
long sys_rt_sigaction(int sig, const struct sigaction __user *act, struct sigaction __user *oact, size_t sigsetsize)¶
alter an action taken by a process
Parameters
int sigsignal to be sent
const struct sigaction __user * actnew sigaction
struct sigaction __user * oactused to save the previous sigaction
size_t sigsetsizesize of sigset_t type
-
long sys_rt_sigsuspend(sigset_t __user *unewset, size_t sigsetsize)¶
replace the signal mask with the unewset value until a signal is received
Parameters
sigset_t __user * unewsetnew signal mask value
size_t sigsetsizesize of sigset_t type
-
kthread_create¶
kthread_create (threadfn, data, namefmt, arg...)
create a kthread on the current node
Parameters
threadfnthe function to run in the thread
datadata pointer for threadfn()
namefmtprintf-style format string for the thread name
arg...arguments for namefmt.
Description
This macro will create a kthread on the current node, leaving it in
the stopped state. This is just a helper for kthread_create_on_node();
see the documentation there for more details.
-
kthread_run¶
kthread_run (threadfn, data, namefmt, ...)
create and wake a thread.
Parameters
threadfnthe function to run until signal_pending(current).
datadata ptr for threadfn.
namefmtprintf-style name for the thread.
...variable arguments
Description
Convenient wrapper for kthread_create() followed by
wake_up_process(). Returns the kthread or ERR_PTR(-ENOMEM).
-
struct task_struct *kthread_run_on_cpu(int (*threadfn)(void *data), void *data, unsigned int cpu, const char *namefmt)¶
create and wake a cpu bound thread.
Parameters
int (*threadfn)(void *data)the function to run until signal_pending(current).
void *datadata ptr for threadfn.
unsigned int cpuThe cpu on which the thread should be bound,
const char *namefmtprintf-style name for the thread. Format is restricted to “name.*%u”. Code fills in cpu number.
Description
Convenient wrapper for kthread_create_on_cpu()
followed by wake_up_process(). Returns the kthread or
ERR_PTR(-ENOMEM).
-
kthread_run_worker¶
kthread_run_worker (flags, namefmt, ...)
create and wake a kthread worker.
Parameters
flagsflags modifying the default behavior of the worker
namefmtprintf-style name for the thread.
...variable arguments
Description
Convenient wrapper for kthread_create_worker() followed by
wake_up_process(). Returns the kthread_worker or ERR_PTR(-ENOMEM).
-
struct kthread_worker *kthread_run_worker_on_cpu(int cpu, unsigned int flags, const char namefmt[])¶
create and wake a cpu bound kthread worker.
Parameters
int cpuCPU number
unsigned int flagsflags modifying the default behavior of the worker
const char namefmt[]printf-style name for the thread. Format is restricted to “name.*%u”. Code fills in cpu number.
Description
Convenient wrapper for kthread_create_worker_on_cpu()
followed by wake_up_process(). Returns the kthread_worker or
ERR_PTR(-ENOMEM).
-
bool kthread_should_stop(void)¶
should this kthread return now?
Parameters
voidno arguments
Description
When someone calls kthread_stop() on your kthread, it will be woken
and this will return true. You should then return, and your return
value will be passed through to kthread_stop().
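The canonical pairing of kthread_run() and kthread_should_stop(); a minimal sketch in which the thread function, its name, and the one-second sleep are all hypothetical:

#include <linux/kthread.h>
#include <linux/delay.h>
#include <linux/err.h>

static int demo_thread_fn(void *data)
{
	/* Loop until kthread_stop() is called on this thread. */
	while (!kthread_should_stop()) {
		/* ... do periodic work ... */
		msleep_interruptible(1000);
	}
	return 0;	/* passed back through kthread_stop() */
}

static struct task_struct *demo_start(void)
{
	struct task_struct *t = kthread_run(demo_thread_fn, NULL, "demo_thread");

	/* kthread_run() returns ERR_PTR(-ENOMEM) on failure. */
	return IS_ERR(t) ? NULL : t;
}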
-
bool kthread_should_park(void)¶
should this kthread park now?
Parameters
voidno arguments
Description
When someone calls kthread_park() on your kthread, it will be woken
and this will return true. You should then do the necessary
cleanup and call kthread_parkme()
Similar to kthread_should_stop(), but this keeps the thread alive
and in a park position. kthread_unpark() “restarts” the thread and
calls the thread function again.
-
bool kthread_freezable_should_stop(bool *was_frozen)¶
should this freezable kthread return now?
Parameters
bool *was_frozenoptional out parameter, indicates whether
current was frozen
Description
kthread_should_stop() for freezable kthreads, which will enter
refrigerator if necessary. This function is safe from kthread_stop() /
freezer deadlock and freezable kthreads should use this function instead
of calling try_to_freeze() directly.
-
void *kthread_func(struct task_struct *task)¶
return the function specified on kthread creation
Parameters
struct task_struct *taskkthread task in question
Description
Returns NULL if the task is not a kthread.
-
void *kthread_data(struct task_struct *task)¶
return data value specified on kthread creation
Parameters
struct task_struct *taskkthread task in question
Description
Return the data value specified when kthread task was created. The caller is responsible for ensuring the validity of task when calling this function.
-
void __noreturn kthread_exit(long result)¶
Cause the current kthread to return result to
kthread_stop().
Parameters
long resultThe integer value to return to
kthread_stop().
Description
While kthread_exit can be called directly, it exists so that functions which do some additional work in non-modular code such as module_put_and_kthread_exit can be implemented.
Does not return.
-
void __noreturn kthread_complete_and_exit(struct completion *comp, long code)¶
Exit the current kthread.
Parameters
struct completion *compCompletion to complete
long codeThe integer value to return to
kthread_stop().
Description
If present, complete comp and then return code to kthread_stop().
A kernel thread whose module may be removed after the completion of comp can use this function to exit safely.
Does not return.
-
struct task_struct *kthread_create_on_node(int (*threadfn)(void *data), void *data, int node, const char namefmt[], ...)¶
create a kthread.
Parameters
int (*threadfn)(void *data)the function to run until signal_pending(current).
void *datadata ptr for threadfn.
int nodetask and thread structures for the thread are allocated on this node
const char namefmt[]printf-style name for the thread.
...variable arguments
Description
This helper function creates and names a kernel
thread. The thread will be stopped: use wake_up_process() to start
it. See also kthread_run(). The new thread has SCHED_NORMAL policy and
is affine to all CPUs.
If thread is going to be bound on a particular cpu, give its node
in node, to get NUMA affinity for kthread stack, or else give NUMA_NO_NODE.
When woken, the thread will run threadfn() with data as its
argument. threadfn() can either return directly if it is a
standalone thread for which no one will call kthread_stop(), or
return when ‘kthread_should_stop()’ is true (which means
kthread_stop() has been called). The return value should be zero
or a negative error number; it will be passed to kthread_stop().
Returns a task_struct or ERR_PTR(-ENOMEM) or ERR_PTR(-EINTR).
-
void kthread_bind(struct task_struct *p, unsigned int cpu)¶
bind a just-created kthread to a cpu.
Parameters
struct task_struct *pthread created by kthread_create().
unsigned int cpucpu (might not be online, must be possible) for k to run on.
Description
This function is equivalent to set_cpus_allowed(),
except that cpu doesn’t need to be online, and the thread must be
stopped (i.e., just returned from kthread_create()).
-
struct task_struct *kthread_create_on_cpu(int (*threadfn)(void *data), void *data, unsigned int cpu, const char *namefmt)¶
Create a cpu bound kthread
Parameters
int (*threadfn)(void *data)the function to run until signal_pending(current).
void *datadata ptr for threadfn.
unsigned int cpuThe cpu on which the thread should be bound,
const char *namefmtprintf-style name for the thread. Format is restricted to “name.*%u”. Code fills in cpu number.
Description
This helper function creates and names a kernel thread
-
void kthread_unpark(struct task_struct *k)¶
unpark a thread created by
kthread_create().
Parameters
struct task_struct *kthread created by
kthread_create().
Description
Sets kthread_should_park() for k to return false, wakes it, and
waits for it to return. If the thread is marked percpu then it is
bound to the cpu again.
-
int kthread_park(struct task_struct *k)¶
park a thread created by
kthread_create().
Parameters
struct task_struct *kthread created by
kthread_create().
Description
Sets kthread_should_park() for k to return true, wakes it, and
waits for it to return. This can also be called after kthread_create()
instead of calling wake_up_process(): the thread will park without
calling threadfn().
Returns 0 if the thread is parked, -ENOSYS if the thread exited. If called by the kthread itself just the park bit is set.
-
int kthread_stop(struct task_struct *k)¶
stop a thread created by
kthread_create().
Parameters
struct task_struct *kthread created by
kthread_create().
Description
Sets kthread_should_stop() for k to return true, wakes it, and
waits for it to exit. This can also be called after kthread_create()
instead of calling wake_up_process(): the thread will exit without
calling threadfn().
If threadfn() may call kthread_exit() itself, the caller must ensure
task_struct can’t go away.
Returns the result of threadfn(), or -EINTR if wake_up_process()
was never called.
-
int kthread_stop_put(struct task_struct *k)¶
stop a thread and put its task struct
Parameters
struct task_struct *kthread created by
kthread_create().
Description
Stops a thread created by kthread_create() and puts its task_struct.
Only use when holding an extra task struct reference obtained by
calling get_task_struct().
-
int kthread_worker_fn(void *worker_ptr)¶
kthread function to process kthread_worker
Parameters
void *worker_ptrpointer to initialized kthread_worker
Description
This function implements the main cycle of kthread worker. It processes
work_list until it is stopped with kthread_stop(). It sleeps when the queue
is empty.
The works are not allowed to keep any locks held, or to leave preemption or interrupts disabled, when they finish. A safe point for freezing is defined when one work finishes and before a new one is started.
Also the works must not be handled by more than one worker at the same time,
see also kthread_queue_work().
-
struct kthread_worker *kthread_create_worker_on_node(unsigned int flags, int node, const char namefmt[], ...)¶
create a kthread worker
Parameters
unsigned int flagsflags modifying the default behavior of the worker
int nodetask structure for the thread is allocated on this node
const char namefmt[]printf-style name for the kthread worker (task).
...variable arguments
Description
Returns a pointer to the allocated worker on success, ERR_PTR(-ENOMEM) when the needed structures could not get allocated, and ERR_PTR(-EINTR) when the caller was killed by a fatal signal.
-
struct kthread_worker *kthread_create_worker_on_cpu(int cpu, unsigned int flags, const char namefmt[])¶
create a kthread worker and bind it to a given CPU and the associated NUMA node.
Parameters
int cpuCPU number
unsigned int flagsflags modifying the default behavior of the worker
const char namefmt[]printf-style name for the thread. Format is restricted to “name.*%u”. Code fills in cpu number.
Description
Use a valid CPU number if you want to bind the kthread worker to the given CPU and the associated NUMA node.
A good practice is to add the cpu number also into the worker name.
For example, use kthread_create_worker_on_cpu(cpu, 0, “helper/%u”).
CPU hotplug: The kthread worker API is simple and generic. It just provides a way to create, use, and destroy workers.
It is up to the API user how to handle CPU hotplug. They have to decide how to handle pending work items, prevent queuing new ones, and restore the functionality when the CPU goes off and on. There are a few catches:
CPU affinity gets lost when it is scheduled on an offline CPU.
The worker might not exist if the CPU was offline when the user created the workers.
Good practice is to implement two CPU hotplug callbacks and to destroy/create the worker when the CPU goes down/up.
Return
The pointer to the allocated worker on success, ERR_PTR(-ENOMEM) when the needed structures could not get allocated, and ERR_PTR(-EINTR) when the caller was killed by a fatal signal.
-
bool kthread_queue_work(struct kthread_worker *worker, struct kthread_work *work)¶
queue a kthread_work
Parameters
struct kthread_worker *workertarget kthread_worker
struct kthread_work *workkthread_work to queue
Description
Queue work to work processor task for async execution. task
must have been created with kthread_create_worker(). Returns true
if work was successfully queued, false if it was already pending.
Reinitialize the work if it needs to be used by another worker. For example, when the worker was stopped and started again.
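A minimal kthread_worker lifecycle sketch; the names are hypothetical, and kthread_init_work() and kthread_create_worker() are the standard companions of the functions documented here:

#include <linux/kthread.h>
#include <linux/err.h>

static struct kthread_worker *demo_worker;
static struct kthread_work demo_work;

static void demo_work_fn(struct kthread_work *work)
{
	/* ... executed in the worker's kthread context ... */
}

static int demo_worker_setup(void)
{
	demo_worker = kthread_create_worker(0, "demo_worker");
	if (IS_ERR(demo_worker))
		return PTR_ERR(demo_worker);

	kthread_init_work(&demo_work, demo_work_fn);
	kthread_queue_work(demo_worker, &demo_work);
	return 0;
}

static void demo_worker_teardown(void)
{
	/* Flushes pending works and destroys the worker kthread. */
	kthread_destroy_worker(demo_worker);
}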
-
void kthread_delayed_work_timer_fn(struct timer_list *t)¶
callback that queues the associated kthread delayed work when the timer expires.
Parameters
struct timer_list *tpointer to the expired timer
Description
The format of the function is defined by struct timer_list. It should have been called from irqsafe timer with irq already off.
-
bool kthread_queue_delayed_work(struct kthread_worker *worker, struct kthread_delayed_work *dwork, unsigned long delay)¶
queue the associated kthread work after a delay.
Parameters
struct kthread_worker *workertarget kthread_worker
struct kthread_delayed_work *dworkkthread_delayed_work to queue
unsigned long delaynumber of jiffies to wait before queuing
Description
If the work has not been pending it starts a timer that will queue the work after the given delay. If delay is zero, it queues the work immediately.
Return
false if the work has already been pending. It means that
either the timer was running or the work was queued. It returns true
otherwise.
-
void kthread_flush_work(struct kthread_work *work)¶
flush a kthread_work
Parameters
struct kthread_work *workwork to flush
Description
If work is queued or executing, wait for it to finish execution.
-
bool kthread_mod_delayed_work(struct kthread_worker *worker, struct kthread_delayed_work *dwork, unsigned long delay)¶
modify delay of or queue a kthread delayed work
Parameters
struct kthread_worker *workerkthread worker to use
struct kthread_delayed_work *dworkkthread delayed work to queue
unsigned long delaynumber of jiffies to wait before queuing
Description
If dwork is idle, equivalent to kthread_queue_delayed_work(). Otherwise,
modify dwork’s timer so that it expires after delay. If delay is zero,
work is guaranteed to be queued immediately.
A special case is when the work is being canceled in parallel.
It might be caused either by the real kthread_cancel_delayed_work_sync()
or yet another kthread_mod_delayed_work() call. We let the other command
win and return true here. The return value can be used for reference
counting and the number of queued works stays the same. Anyway, the caller
is supposed to synchronize these operations in a reasonable way.
This function is safe to call from any context including IRQ handler.
See __kthread_cancel_work() and kthread_delayed_work_timer_fn()
for details.
Return
false if dwork was idle and queued, true otherwise.
-
bool kthread_cancel_work_sync(struct kthread_work *work)¶
cancel a kthread work and wait for it to finish
Parameters
struct kthread_work *workthe kthread work to cancel
Description
Cancel work and wait for its execution to finish. This function can be used even if the work re-queues itself. On return from this function, work is guaranteed to be not pending or executing on any CPU.
kthread_cancel_work_sync(delayed_work->work) must not be used for
delayed_work’s. Use kthread_cancel_delayed_work_sync() instead.
The caller must ensure that the worker on which work was last queued can’t be destroyed before this function returns.
Return
true if work was pending, false otherwise.
-
bool kthread_cancel_delayed_work_sync(struct kthread_delayed_work *dwork)¶
cancel a kthread delayed work and wait for it to finish.
Parameters
struct kthread_delayed_work *dworkthe kthread delayed work to cancel
Description
This is kthread_cancel_work_sync() for delayed works.
Return
true if dwork was pending, false otherwise.
-
void kthread_flush_worker(struct kthread_worker *worker)¶
flush all current works on a kthread_worker
Parameters
struct kthread_worker *workerworker to flush
Description
Wait until all currently executing or pending works on worker are finished.
-
void kthread_destroy_worker(struct kthread_worker *worker)¶
destroy a kthread worker
Parameters
struct kthread_worker *workerworker to be destroyed
Description
Flush and destroy worker. The simple flush is enough because the kthread worker API is used only in trivial scenarios. There are no multi-step state machines needed.
Note that this function does not handle delayed work, so the caller is responsible for queuing or canceling all delayed work items before invoking this function.
-
void kthread_use_mm(struct mm_struct *mm)¶
make the calling kthread operate on an address space
Parameters
struct mm_struct *mmaddress space to operate on
-
void kthread_unuse_mm(struct mm_struct *mm)¶
reverse the effect of
kthread_use_mm()
Parameters
struct mm_struct *mmaddress space to operate on
-
void kthread_associate_blkcg(struct cgroup_subsys_state *css)¶
associate blkcg to current kthread
Parameters
struct cgroup_subsys_state *cssthe cgroup info
Description
Current thread must be a kthread. The thread is running jobs on behalf of other threads. In some cases, we expect the jobs to attach the cgroup info of the original threads instead of that of the current thread. This function stores the original thread’s cgroup info in the current kthread context for later retrieval.
Reference counting¶
-
void refcount_set(refcount_t *r, int n)¶
set a refcount’s value
Parameters
refcount_t *rthe refcount
int nvalue to which the refcount will be set
-
unsigned int refcount_read(const refcount_t *r)¶
get a refcount’s value
Parameters
const refcount_t *rthe refcount
Return
the refcount’s value
-
bool refcount_add_not_zero(int i, refcount_t *r)¶
add a value to a refcount unless it is 0
Parameters
int ithe value to add to the refcount
refcount_t *rthe refcount
Description
Will saturate at REFCOUNT_SATURATED and WARN.
Provides no memory ordering, it is assumed the caller has guaranteed the object memory to be stable (RCU, etc.). It does provide a control dependency and thereby orders future stores. See the comment on top.
Use of this function is not recommended for the normal reference counting
use case in which references are taken and released one at a time. In these
cases, refcount_inc(), or one of its variants, should instead be used to
increment a reference count.
Return
false if the passed refcount is 0, true otherwise
-
void refcount_add(int i, refcount_t *r)¶
add a value to a refcount
Parameters
int ithe value to add to the refcount
refcount_t *rthe refcount
Description
Similar to atomic_add(), but will saturate at REFCOUNT_SATURATED and WARN.
Provides no memory ordering, it is assumed the caller has guaranteed the object memory to be stable (RCU, etc.). It does provide a control dependency and thereby orders future stores. See the comment on top.
Use of this function is not recommended for the normal reference counting
use case in which references are taken and released one at a time. In these
cases, refcount_inc(), or one of its variants, should instead be used to
increment a reference count.
-
bool refcount_inc_not_zero(refcount_t *r)¶
increment a refcount unless it is 0
Parameters
refcount_t *rthe refcount to increment
Description
Similar to atomic_inc_not_zero(), but will saturate at REFCOUNT_SATURATED
and WARN.
Provides no memory ordering, it is assumed the caller has guaranteed the object memory to be stable (RCU, etc.). It does provide a control dependency and thereby orders future stores. See the comment on top.
Return
true if the increment was successful, false otherwise
-
void refcount_inc(refcount_t *r)¶
increment a refcount
Parameters
refcount_t *rthe refcount to increment
Description
Similar to atomic_inc(), but will saturate at REFCOUNT_SATURATED and WARN.
Provides no memory ordering, it is assumed the caller already has a reference on the object.
Will WARN if the refcount is 0, as this represents a possible use-after-free condition.
-
bool refcount_sub_and_test(int i, refcount_t *r)¶
subtract from a refcount and test if it is 0
Parameters
int iamount to subtract from the refcount
refcount_t *rthe refcount
Description
Similar to atomic_dec_and_test(), but it will WARN, return false and
ultimately leak on underflow and will fail to decrement when saturated
at REFCOUNT_SATURATED.
Provides release memory ordering, such that prior loads and stores are done before, and provides an acquire ordering on success such that free() must come after.
Use of this function is not recommended for the normal reference counting
use case in which references are taken and released one at a time. In these
cases, refcount_dec(), or one of its variants, should instead be used to
decrement a reference count.
Return
true if the resulting refcount is 0, false otherwise
-
bool refcount_dec_and_test(refcount_t *r)¶
decrement a refcount and test if it is 0
Parameters
refcount_t *rthe refcount
Description
Similar to atomic_dec_and_test(), it will WARN on underflow and fail to
decrement when saturated at REFCOUNT_SATURATED.
Provides release memory ordering, such that prior loads and stores are done before, and provides an acquire ordering on success such that free() must come after.
Return
true if the resulting refcount is 0, false otherwise
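A minimal sketch of the usual get/put pattern built directly on refcount_t; struct my_obj and the my_obj_*() helpers are hypothetical names for the example:

#include <linux/refcount.h>
#include <linux/slab.h>

/* Hypothetical reference-counted object. */
struct my_obj {
	refcount_t refs;
	/* ... payload ... */
};

static struct my_obj *my_obj_alloc(void)
{
	struct my_obj *obj = kzalloc(sizeof(*obj), GFP_KERNEL);

	if (obj)
		refcount_set(&obj->refs, 1);	/* caller owns the first reference */
	return obj;
}

/* Take an extra reference; the caller must already hold one. */
static void my_obj_get(struct my_obj *obj)
{
	refcount_inc(&obj->refs);
}

/* Drop a reference; free the object when the last one goes away. */
static void my_obj_put(struct my_obj *obj)
{
	if (refcount_dec_and_test(&obj->refs))
		kfree(obj);
}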
-
void refcount_dec(refcount_t *r)¶
decrement a refcount
Parameters
refcount_t *rthe refcount
Description
Similar to atomic_dec(), it will WARN on underflow and fail to decrement
when saturated at REFCOUNT_SATURATED.
Provides release memory ordering, such that prior loads and stores are done before.
-
bool refcount_dec_if_one(refcount_t *r)¶
decrement a refcount if it is 1
Parameters
refcount_t *rthe refcount
Description
No atomic_t counterpart, it attempts a 1 -> 0 transition and returns the success thereof.
Like all decrement operations, it provides release memory order and provides a control dependency.
It can be used like a try-delete operator; this explicit case is provided instead of a generic cmpxchg helper because such a helper would allow implementing unsafe operations.
Return
true if the resulting refcount is 0, false otherwise
-
bool refcount_dec_not_one(refcount_t *r)¶
decrement a refcount if it is not 1
Parameters
refcount_t *rthe refcount
Description
No atomic_t counterpart, it decrements unless the value is 1, in which case it will return false.
Was often done like: atomic_add_unless(var, -1, 1)
Return
true if the decrement operation was successful, false otherwise
-
bool refcount_dec_and_mutex_lock(refcount_t *r, struct mutex *lock)¶
return holding mutex if able to decrement refcount to 0
Parameters
refcount_t *rthe refcount
struct mutex *lockthe mutex to be locked
Description
Similar to atomic_dec_and_mutex_lock(), it will WARN on underflow and fail
to decrement when saturated at REFCOUNT_SATURATED.
Provides release memory ordering, such that prior loads and stores are done before, and provides a control dependency such that free() must come after. See the comment on top.
Return
true and hold mutex if able to decrement refcount to 0, false otherwise
-
bool refcount_dec_and_lock(refcount_t *r, spinlock_t *lock)¶
return holding spinlock if able to decrement refcount to 0
Parameters
refcount_t *rthe refcount
spinlock_t *lockthe spinlock to be locked
Description
Similar to atomic_dec_and_lock(), it will WARN on underflow and fail to decrement when saturated at REFCOUNT_SATURATED.
Provides release memory ordering, such that prior loads and stores are done before, and provides a control dependency such that free() must come after. See the comment on top.
Return
true and hold spinlock if able to decrement refcount to 0, false otherwise
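A minimal sketch of the common use case: dropping the last reference must also remove the object from a lookup list protected by the same lock, and refcount_dec_and_lock() only takes the lock on the final 1 -> 0 transition. struct my_node, my_list_lock and my_node_put() are hypothetical:

#include <linux/refcount.h>
#include <linux/spinlock.h>
#include <linux/list.h>
#include <linux/slab.h>

/* Hypothetical object that lives on a global lookup list. */
struct my_node {
	refcount_t refs;
	struct list_head link;
};

static DEFINE_SPINLOCK(my_list_lock);

static void my_node_put(struct my_node *node)
{
	/* Returns true (with the lock held) only for the final reference,
	 * so a lookup-and-get under my_list_lock cannot race with the free.
	 */
	if (refcount_dec_and_lock(&node->refs, &my_list_lock)) {
		list_del(&node->link);
		spin_unlock(&my_list_lock);
		kfree(node);
	}
}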
-
bool refcount_dec_and_lock_irqsave(refcount_t *r, spinlock_t *lock, unsigned long *flags)¶
return holding spinlock with disabled interrupts if able to decrement refcount to 0
Parameters
refcount_t *rthe refcount
spinlock_t *lockthe spinlock to be locked
unsigned long *flagssaved IRQ-flags if the lock is acquired
Description
Same as refcount_dec_and_lock() above except that the spinlock is acquired
with disabled interrupts.
Return
true and hold spinlock if able to decrement refcount to 0, false otherwise
Atomics¶
-
int atomic_read(const atomic_t *v)¶
atomic load with relaxed ordering
Parameters
const atomic_t *vpointer to atomic_t
Description
Atomically loads the value of v with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic_read() there.
Return
The value loaded from v.
-
int atomic_read_acquire(const atomic_t *v)¶
atomic load with acquire ordering
Parameters
const atomic_t *vpointer to atomic_t
Description
Atomically loads the value of v with acquire ordering.
Unsafe to use in noinstr code; use raw_atomic_read_acquire() there.
Return
The value loaded from v.
-
void atomic_set(atomic_t *v, int i)¶
atomic set with relaxed ordering
Parameters
atomic_t *vpointer to atomic_t
int iint value to assign
Description
Atomically sets v to i with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic_set() there.
Return
Nothing.
-
void atomic_set_release(atomic_t *v, int i)¶
atomic set with release ordering
Parameters
atomic_t *vpointer to atomic_t
int iint value to assign
Description
Atomically sets v to i with release ordering.
Unsafe to use in noinstr code; use raw_atomic_set_release() there.
Return
Nothing.
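A minimal sketch of how the release/acquire pair is typically used to publish data from one context to another; the producer/consumer functions and the data value are hypothetical:

#include <linux/atomic.h>
#include <linux/printk.h>

/* Hypothetical handshake: "data" is written before "ready" is set, and the
 * acquire load pairs with the release store, so the consumer is guaranteed
 * to see the data once it observes ready == 1.
 */
static int data;
static atomic_t ready = ATOMIC_INIT(0);

static void producer(void)
{
	data = 42;			/* plain store */
	atomic_set_release(&ready, 1);	/* publish: orders the store above */
}

static void consumer(void)
{
	if (atomic_read_acquire(&ready))	/* pairs with the release store */
		pr_info("data = %d\n", data);	/* guaranteed to see 42 */
}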
-
void atomic_add(int i, atomic_t *v)¶
atomic add with relaxed ordering
Parameters
int iint value to add
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v + i) with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic_add() there.
Return
Nothing.
-
int atomic_add_return(int i, atomic_t *v)¶
atomic add with full ordering
Parameters
int iint value to add
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v + i) with full ordering.
Unsafe to use in noinstr code; use raw_atomic_add_return() there.
Return
The updated value of v.
-
int atomic_add_return_acquire(int i, atomic_t *v)¶
atomic add with acquire ordering
Parameters
int iint value to add
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v + i) with acquire ordering.
Unsafe to use in noinstr code; use raw_atomic_add_return_acquire() there.
Return
The updated value of v.
-
int atomic_add_return_release(int i, atomic_t *v)¶
atomic add with release ordering
Parameters
int iint value to add
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v + i) with release ordering.
Unsafe to use in noinstr code; use raw_atomic_add_return_release() there.
Return
The updated value of v.
-
int atomic_add_return_relaxed(int i, atomic_t *v)¶
atomic add with relaxed ordering
Parameters
int iint value to add
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v + i) with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic_add_return_relaxed() there.
Return
The updated value of v.
-
int atomic_fetch_add(int i, atomic_t *v)¶
atomic add with full ordering
Parameters
int iint value to add
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v + i) with full ordering.
Unsafe to use in noinstr code; use raw_atomic_fetch_add() there.
Return
The original value of v.
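A minimal sketch that uses the original value returned by atomic_fetch_add() to hand out distinct slots to racing callers; the log structure, its power-of-two size and the helper names are hypothetical:

#include <linux/atomic.h>
#include <linux/types.h>

#define MY_LOG_SLOTS 128		/* hypothetical ring size, power of two */

struct my_log_entry {
	u64 ts;				/* hypothetical payload */
};

static struct my_log_entry my_log[MY_LOG_SLOTS];
static atomic_t my_log_next = ATOMIC_INIT(0);

/* Each caller atomically reserves a distinct slot: atomic_fetch_add()
 * returns the pre-increment counter value, so no two racing callers
 * ever receive the same index.
 */
static struct my_log_entry *my_log_claim(void)
{
	unsigned int idx;

	idx = (unsigned int)atomic_fetch_add(1, &my_log_next) & (MY_LOG_SLOTS - 1);
	return &my_log[idx];
}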
-
int atomic_fetch_add_acquire(int i, atomic_t *v)¶
atomic add with acquire ordering
Parameters
int iint value to add
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v + i) with acquire ordering.
Unsafe to use in noinstr code; use raw_atomic_fetch_add_acquire() there.
Return
The original value of v.
-
int atomic_fetch_add_release(int i, atomic_t *v)¶
atomic add with release ordering
Parameters
int iint value to add
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v + i) with release ordering.
Unsafe to use in noinstr code; use raw_atomic_fetch_add_release() there.
Return
The original value of v.
-
int atomic_fetch_add_relaxed(int i, atomic_t *v)¶
atomic add with relaxed ordering
Parameters
int iint value to add
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v + i) with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic_fetch_add_relaxed() there.
Return
The original value of v.
-
void atomic_sub(int i, atomic_t *v)¶
atomic subtract with relaxed ordering
Parameters
int iint value to subtract
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v - i) with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic_sub() there.
Return
Nothing.
-
int atomic_sub_return(int i, atomic_t *v)¶
atomic subtract with full ordering
Parameters
int iint value to subtract
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v - i) with full ordering.
Unsafe to use in noinstr code; use raw_atomic_sub_return() there.
Return
The updated value of v.
-
int atomic_sub_return_acquire(int i, atomic_t *v)¶
atomic subtract with acquire ordering
Parameters
int iint value to subtract
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v - i) with acquire ordering.
Unsafe to use in noinstr code; use raw_atomic_sub_return_acquire() there.
Return
The updated value of v.
-
int atomic_sub_return_release(int i, atomic_t *v)¶
atomic subtract with release ordering
Parameters
int iint value to subtract
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v - i) with release ordering.
Unsafe to use in noinstr code; use raw_atomic_sub_return_release() there.
Return
The updated value of v.
-
int atomic_sub_return_relaxed(int i, atomic_t *v)¶
atomic subtract with relaxed ordering
Parameters
int iint value to subtract
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v - i) with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic_sub_return_relaxed() there.
Return
The updated value of v.
-
int atomic_fetch_sub(int i, atomic_t *v)¶
atomic subtract with full ordering
Parameters
int iint value to subtract
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v - i) with full ordering.
Unsafe to use in noinstr code; use raw_atomic_fetch_sub() there.
Return
The original value of v.
-
int atomic_fetch_sub_acquire(int i, atomic_t *v)¶
atomic subtract with acquire ordering
Parameters
int iint value to subtract
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v - i) with acquire ordering.
Unsafe to use in noinstr code; use raw_atomic_fetch_sub_acquire() there.
Return
The original value of v.
-
int atomic_fetch_sub_release(int i, atomic_t *v)¶
atomic subtract with release ordering
Parameters
int iint value to subtract
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v - i) with release ordering.
Unsafe to use in noinstr code; use raw_atomic_fetch_sub_release() there.
Return
The original value of v.
-
int atomic_fetch_sub_relaxed(int i, atomic_t *v)¶
atomic subtract with relaxed ordering
Parameters
int iint value to subtract
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v - i) with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic_fetch_sub_relaxed() there.
Return
The original value of v.
-
void atomic_inc(atomic_t *v)¶
atomic increment with relaxed ordering
Parameters
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v + 1) with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic_inc() there.
Return
Nothing.
-
int atomic_inc_return(atomic_t *v)¶
atomic increment with full ordering
Parameters
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v + 1) with full ordering.
Unsafe to use in noinstr code; use raw_atomic_inc_return() there.
Return
The updated value of v.
-
int atomic_inc_return_acquire(atomic_t *v)¶
atomic increment with acquire ordering
Parameters
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v + 1) with acquire ordering.
Unsafe to use in noinstr code; use raw_atomic_inc_return_acquire() there.
Return
The updated value of v.
-
int atomic_inc_return_release(atomic_t *v)¶
atomic increment with release ordering
Parameters
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v + 1) with release ordering.
Unsafe to use in noinstr code; use raw_atomic_inc_return_release() there.
Return
The updated value of v.
-
int atomic_inc_return_relaxed(atomic_t *v)¶
atomic increment with relaxed ordering
Parameters
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v + 1) with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic_inc_return_relaxed() there.
Return
The updated value of v.
-
int atomic_fetch_inc(atomic_t *v)¶
atomic increment with full ordering
Parameters
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v + 1) with full ordering.
Unsafe to use in noinstr code; use raw_atomic_fetch_inc() there.
Return
The original value of v.
-
int atomic_fetch_inc_acquire(atomic_t *v)¶
atomic increment with acquire ordering
Parameters
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v + 1) with acquire ordering.
Unsafe to use in noinstr code; use raw_atomic_fetch_inc_acquire() there.
Return
The original value of v.
-
int atomic_fetch_inc_release(atomic_t *v)¶
atomic increment with release ordering
Parameters
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v + 1) with release ordering.
Unsafe to use in noinstr code; use raw_atomic_fetch_inc_release() there.
Return
The original value of v.
-
int atomic_fetch_inc_relaxed(atomic_t *v)¶
atomic increment with relaxed ordering
Parameters
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v + 1) with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic_fetch_inc_relaxed() there.
Return
The original value of v.
-
void atomic_dec(atomic_t *v)¶
atomic decrement with relaxed ordering
Parameters
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v - 1) with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic_dec() there.
Return
Nothing.
-
int atomic_dec_return(atomic_t *v)¶
atomic decrement with full ordering
Parameters
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v - 1) with full ordering.
Unsafe to use in noinstr code; use raw_atomic_dec_return() there.
Return
The updated value of v.
-
int atomic_dec_return_acquire(atomic_t *v)¶
atomic decrement with acquire ordering
Parameters
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v - 1) with acquire ordering.
Unsafe to use in noinstr code; use raw_atomic_dec_return_acquire() there.
Return
The updated value of v.
-
int atomic_dec_return_release(atomic_t *v)¶
atomic decrement with release ordering
Parameters
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v - 1) with release ordering.
Unsafe to use in noinstr code; use raw_atomic_dec_return_release() there.
Return
The updated value of v.
-
int atomic_dec_return_relaxed(atomic_t *v)¶
atomic decrement with relaxed ordering
Parameters
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v - 1) with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic_dec_return_relaxed() there.
Return
The updated value of v.
-
int atomic_fetch_dec(atomic_t *v)¶
atomic decrement with full ordering
Parameters
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v - 1) with full ordering.
Unsafe to use in noinstr code; use raw_atomic_fetch_dec() there.
Return
The original value of v.
-
int atomic_fetch_dec_acquire(atomic_t *v)¶
atomic decrement with acquire ordering
Parameters
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v - 1) with acquire ordering.
Unsafe to use in noinstr code; use raw_atomic_fetch_dec_acquire() there.
Return
The original value of v.
-
int atomic_fetch_dec_release(atomic_t *v)¶
atomic decrement with release ordering
Parameters
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v - 1) with release ordering.
Unsafe to use in noinstr code; use raw_atomic_fetch_dec_release() there.
Return
The original value of v.
-
int atomic_fetch_dec_relaxed(atomic_t *v)¶
atomic decrement with relaxed ordering
Parameters
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v - 1) with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic_fetch_dec_relaxed() there.
Return
The original value of v.
-
void atomic_and(int i, atomic_t *v)¶
atomic bitwise AND with relaxed ordering
Parameters
int iint value
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v & i) with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic_and() there.
Return
Nothing.
-
int atomic_fetch_and(int i, atomic_t *v)¶
atomic bitwise AND with full ordering
Parameters
int iint value
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v & i) with full ordering.
Unsafe to use in noinstr code; use raw_atomic_fetch_and() there.
Return
The original value of v.
-
int atomic_fetch_and_acquire(int i, atomic_t *v)¶
atomic bitwise AND with acquire ordering
Parameters
int iint value
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v & i) with acquire ordering.
Unsafe to use in noinstr code; use raw_atomic_fetch_and_acquire() there.
Return
The original value of v.
-
int atomic_fetch_and_release(int i, atomic_t *v)¶
atomic bitwise AND with release ordering
Parameters
int iint value
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v & i) with release ordering.
Unsafe to use in noinstr code; use raw_atomic_fetch_and_release() there.
Return
The original value of v.
-
int atomic_fetch_and_relaxed(int i, atomic_t *v)¶
atomic bitwise AND with relaxed ordering
Parameters
int iint value
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v & i) with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic_fetch_and_relaxed() there.
Return
The original value of v.
-
void atomic_andnot(int i, atomic_t *v)¶
atomic bitwise AND NOT with relaxed ordering
Parameters
int iint value
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v & ~i) with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic_andnot() there.
Return
Nothing.
-
int atomic_fetch_andnot(int i, atomic_t *v)¶
atomic bitwise AND NOT with full ordering
Parameters
int iint value
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v & ~i) with full ordering.
Unsafe to use in noinstr code; use raw_atomic_fetch_andnot() there.
Return
The original value of v.
-
int atomic_fetch_andnot_acquire(int i, atomic_t *v)¶
atomic bitwise AND NOT with acquire ordering
Parameters
int iint value
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v & ~i) with acquire ordering.
Unsafe to use in noinstr code; use raw_atomic_fetch_andnot_acquire() there.
Return
The original value of v.
-
int atomic_fetch_andnot_release(int i, atomic_t *v)¶
atomic bitwise AND NOT with release ordering
Parameters
int iint value
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v & ~i) with release ordering.
Unsafe to use in noinstr code; use raw_atomic_fetch_andnot_release() there.
Return
The original value of v.
-
int atomic_fetch_andnot_relaxed(int i, atomic_t *v)¶
atomic bitwise AND NOT with relaxed ordering
Parameters
int iint value
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v & ~i) with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic_fetch_andnot_relaxed() there.
Return
The original value of v.
-
void atomic_or(int i, atomic_t *v)¶
atomic bitwise OR with relaxed ordering
Parameters
int iint value
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v | i) with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic_or() there.
Return
Nothing.
-
int atomic_fetch_or(int i, atomic_t *v)¶
atomic bitwise OR with full ordering
Parameters
int iint value
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v | i) with full ordering.
Unsafe to use in noinstr code; use raw_atomic_fetch_or() there.
Return
The original value of v.
-
int atomic_fetch_or_acquire(int i, atomic_t *v)¶
atomic bitwise OR with acquire ordering
Parameters
int iint value
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v | i) with acquire ordering.
Unsafe to use in noinstr code; use raw_atomic_fetch_or_acquire() there.
Return
The original value of v.
-
int atomic_fetch_or_release(int i, atomic_t *v)¶
atomic bitwise OR with release ordering
Parameters
int iint value
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v | i) with release ordering.
Unsafe to use in noinstr code; use raw_atomic_fetch_or_release() there.
Return
The original value of v.
-
int atomic_fetch_or_relaxed(int i, atomic_t *v)¶
atomic bitwise OR with relaxed ordering
Parameters
int iint value
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v | i) with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic_fetch_or_relaxed() there.
Return
The original value of v.
-
void atomic_xor(int i, atomic_t *v)¶
atomic bitwise XOR with relaxed ordering
Parameters
int iint value
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v ^ i) with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic_xor() there.
Return
Nothing.
-
int atomic_fetch_xor(int i, atomic_t *v)¶
atomic bitwise XOR with full ordering
Parameters
int iint value
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v ^ i) with full ordering.
Unsafe to use in noinstr code; use raw_atomic_fetch_xor() there.
Return
The original value of v.
-
int atomic_fetch_xor_acquire(int i, atomic_t *v)¶
atomic bitwise XOR with acquire ordering
Parameters
int iint value
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v ^ i) with acquire ordering.
Unsafe to use in noinstr code; use raw_atomic_fetch_xor_acquire() there.
Return
The original value of v.
-
int atomic_fetch_xor_release(int i, atomic_t *v)¶
atomic bitwise XOR with release ordering
Parameters
int iint value
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v ^ i) with release ordering.
Unsafe to use in noinstr code; use raw_atomic_fetch_xor_release() there.
Return
The original value of v.
-
int atomic_fetch_xor_relaxed(int i, atomic_t *v)¶
atomic bitwise XOR with relaxed ordering
Parameters
int iint value
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v ^ i) with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic_fetch_xor_relaxed() there.
Return
The original value of v.
-
int atomic_xchg(atomic_t *v, int new)¶
atomic exchange with full ordering
Parameters
atomic_t *vpointer to atomic_t
int newint value to assign
Description
Atomically updates v to new with full ordering.
Unsafe to use in noinstr code; use raw_atomic_xchg() there.
Return
The original value of v.
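A minimal sketch that uses atomic_xchg() to atomically fetch and clear a word of pending events, so producers and the consumer neither lose nor double-handle an event; the event helpers are hypothetical:

#include <linux/atomic.h>
#include <linux/bitops.h>

static atomic_t pending_events = ATOMIC_INIT(0);

/* Producer side (e.g. an interrupt handler): mark an event as pending.
 * bit must be below 32 since the word is a 32-bit atomic_t.
 */
static void my_post_event(unsigned int bit)
{
	atomic_or(1 << bit, &pending_events);
}

/* Consumer side: grab everything that is pending and reset the word to
 * zero in one atomic step, then walk the bits that were set.
 */
static void my_handle_events(void)
{
	unsigned int events = atomic_xchg(&pending_events, 0);

	while (events) {
		unsigned int bit = __ffs(events);

		events &= ~(1U << bit);
		/* ... handle event number bit ... */
	}
}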
-
int atomic_xchg_acquire(atomic_t *v, int new)¶
atomic exchange with acquire ordering
Parameters
atomic_t *vpointer to atomic_t
int newint value to assign
Description
Atomically updates v to new with acquire ordering.
Unsafe to use in noinstr code; use raw_atomic_xchg_acquire() there.
Return
The original value of v.
-
int atomic_xchg_release(atomic_t *v, int new)¶
atomic exchange with release ordering
Parameters
atomic_t *vpointer to atomic_t
int newint value to assign
Description
Atomically updates v to new with release ordering.
Unsafe to use in noinstr code; use raw_atomic_xchg_release() there.
Return
The original value of v.
-
int atomic_xchg_relaxed(atomic_t *v, int new)¶
atomic exchange with relaxed ordering
Parameters
atomic_t *vpointer to atomic_t
int newint value to assign
Description
Atomically updates v to new with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic_xchg_relaxed() there.
Return
The original value of v.
-
int atomic_cmpxchg(atomic_t *v, int old, int new)¶
atomic compare and exchange with full ordering
Parameters
atomic_t *vpointer to atomic_t
int oldint value to compare with
int newint value to assign
Description
If (v == old), atomically updates v to new with full ordering. Otherwise, v is not modified and relaxed ordering is provided.
Unsafe to use in noinstr code; use raw_atomic_cmpxchg() there.
Return
The original value of v.
-
int atomic_cmpxchg_acquire(atomic_t *v, int old, int new)¶
atomic compare and exchange with acquire ordering
Parameters
atomic_t *vpointer to atomic_t
int oldint value to compare with
int newint value to assign
Description
If (v == old), atomically updates v to new with acquire ordering. Otherwise, v is not modified and relaxed ordering is provided.
Unsafe to use in noinstr code; use raw_atomic_cmpxchg_acquire() there.
Return
The original value of v.
-
int atomic_cmpxchg_release(atomic_t *v, int old, int new)¶
atomic compare and exchange with release ordering
Parameters
atomic_t *vpointer to atomic_t
int oldint value to compare with
int newint value to assign
Description
If (v == old), atomically updates v to new with release ordering. Otherwise, v is not modified and relaxed ordering is provided.
Unsafe to use in noinstr code; use raw_atomic_cmpxchg_release() there.
Return
The original value of v.
-
int atomic_cmpxchg_relaxed(atomic_t *v, int old, int new)¶
atomic compare and exchange with relaxed ordering
Parameters
atomic_t *vpointer to atomic_t
int oldint value to compare with
int newint value to assign
Description
If (v == old), atomically updates v to new with relaxed ordering. Otherwise, v is not modified and relaxed ordering is provided.
Unsafe to use in noinstr code; use raw_atomic_cmpxchg_relaxed() there.
Return
The original value of v.
-
bool atomic_try_cmpxchg(atomic_t *v, int *old, int new)¶
atomic compare and exchange with full ordering
Parameters
atomic_t *vpointer to atomic_t
int *oldpointer to int value to compare with
int newint value to assign
Description
If (v == old), atomically updates v to new with full ordering. Otherwise, v is not modified, old is updated to the current value of v, and relaxed ordering is provided.
Unsafe to use in noinstr code; use raw_atomic_try_cmpxchg() there.
Return
true if the exchange occurred, false otherwise.
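A minimal sketch of the canonical atomic_try_cmpxchg() read-modify-write loop, here tracking a running maximum; my_max and my_track_max() are hypothetical:

#include <linux/atomic.h>

/* Hypothetical statistic: record the largest value ever observed. */
static atomic_t my_max = ATOMIC_INIT(0);

static void my_track_max(int val)
{
	int old = atomic_read(&my_max);

	/* On failure, atomic_try_cmpxchg() updates old to the current value
	 * of my_max, so the loop retries with fresh data without an extra
	 * atomic_read().
	 */
	do {
		if (val <= old)
			return;		/* current maximum is already >= val */
	} while (!atomic_try_cmpxchg(&my_max, &old, val));
}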
-
bool atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)¶
atomic compare and exchange with acquire ordering
Parameters
atomic_t *vpointer to atomic_t
int *oldpointer to int value to compare with
int newint value to assign
Description
If (v == old), atomically updates v to new with acquire ordering. Otherwise, v is not modified, old is updated to the current value of v, and relaxed ordering is provided.
Unsafe to use in noinstr code; use raw_atomic_try_cmpxchg_acquire() there.
Return
true if the exchange occurred, false otherwise.
-
bool atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)¶
atomic compare and exchange with release ordering
Parameters
atomic_t *vpointer to atomic_t
int *oldpointer to int value to compare with
int newint value to assign
Description
If (v == old), atomically updates v to new with release ordering. Otherwise, v is not modified, old is updated to the current value of v, and relaxed ordering is provided.
Unsafe to use in noinstr code; use raw_atomic_try_cmpxchg_release() there.
Return
true if the exchange occurred, false otherwise.
-
bool atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new)¶
atomic compare and exchange with relaxed ordering
Parameters
atomic_t *vpointer to atomic_t
int *oldpointer to int value to compare with
int newint value to assign
Description
If (v == old), atomically updates v to new with relaxed ordering. Otherwise, v is not modified, old is updated to the current value of v, and relaxed ordering is provided.
Unsafe to use in noinstr code; use raw_atomic_try_cmpxchg_relaxed() there.
Return
true if the exchange occurred, false otherwise.
-
bool atomic_sub_and_test(int i, atomic_t *v)¶
atomic subtract and test if zero with full ordering
Parameters
int iint value to subtract
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v - i) with full ordering.
Unsafe to use in noinstr code; use raw_atomic_sub_and_test() there.
Return
true if the resulting value of v is zero, false otherwise.
-
bool atomic_dec_and_test(atomic_t *v)¶
atomic decrement and test if zero with full ordering
Parameters
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v - 1) with full ordering.
Unsafe to use in noinstr code; use raw_atomic_dec_and_test() there.
Return
true if the resulting value of v is zero, false otherwise.
-
bool atomic_inc_and_test(atomic_t *v)¶
atomic increment and test if zero with full ordering
Parameters
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v + 1) with full ordering.
Unsafe to use in noinstr code; use raw_atomic_inc_and_test() there.
Return
true if the resulting value of v is zero, false otherwise.
-
bool atomic_add_negative(int i, atomic_t *v)¶
atomic add and test if negative with full ordering
Parameters
int iint value to add
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v + i) with full ordering.
Unsafe to use in noinstr code; use raw_atomic_add_negative() there.
Return
true if the resulting value of v is negative, false otherwise.
-
bool atomic_add_negative_acquire(int i, atomic_t *v)¶
atomic add and test if negative with acquire ordering
Parameters
int iint value to add
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v + i) with acquire ordering.
Unsafe to use in noinstr code; use raw_atomic_add_negative_acquire() there.
Return
true if the resulting value of v is negative, false otherwise.
-
bool atomic_add_negative_release(int i, atomic_t *v)¶
atomic add and test if negative with release ordering
Parameters
int iint value to add
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v + i) with release ordering.
Unsafe to use in noinstr code; use raw_atomic_add_negative_release() there.
Return
true if the resulting value of v is negative, false otherwise.
-
bool atomic_add_negative_relaxed(int i, atomic_t *v)¶
atomic add and test if negative with relaxed ordering
Parameters
int iint value to add
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v + i) with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic_add_negative_relaxed() there.
Return
true if the resulting value of v is negative, false otherwise.
-
int atomic_fetch_add_unless(atomic_t *v, int a, int u)¶
atomic add unless value with full ordering
Parameters
atomic_t *vpointer to atomic_t
int aint value to add
int uint value to compare with
Description
If (v != u), atomically updates v to (v + a) with full ordering. Otherwise, v is not modified and relaxed ordering is provided.
Unsafe to use in noinstr code; use raw_atomic_fetch_add_unless() there.
Return
The original value of v.
-
bool atomic_add_unless(atomic_t *v, int a, int u)¶
atomic add unless value with full ordering
Parameters
atomic_t *vpointer to atomic_t
int aint value to add
int uint value to compare with
Description
If (v != u), atomically updates v to (v + a) with full ordering. Otherwise, v is not modified and relaxed ordering is provided.
Unsafe to use in noinstr code; use raw_atomic_add_unless() there.
Return
true if v was updated, false otherwise.
-
bool atomic_inc_not_zero(atomic_t *v)¶
atomic increment unless zero with full ordering
Parameters
atomic_t *vpointer to atomic_t
Description
If (v != 0), atomically updates v to (v + 1) with full ordering. Otherwise, v is not modified and relaxed ordering is provided.
Unsafe to use in noinstr code; use raw_atomic_inc_not_zero() there.
Return
true if v was updated, false otherwise.
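A minimal sketch of the "get only if still alive" idiom on a plain atomic_t usage counter (for real reference counting, prefer refcount_t as documented above); struct my_session and my_session_tryget() are hypothetical:

#include <linux/atomic.h>
#include <linux/types.h>

/* Hypothetical object with a plain atomic_t usage counter. */
struct my_session {
	atomic_t users;		/* 0 means the session is being torn down */
	/* ... */
};

/* Try to start using a session found via some lookup structure. Fails
 * (returns false) if users has already dropped to zero, i.e. teardown has
 * begun and the session must not be used anymore.
 */
static bool my_session_tryget(struct my_session *s)
{
	return atomic_inc_not_zero(&s->users);
}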
-
bool atomic_inc_unless_negative(atomic_t *v)¶
atomic increment unless negative with full ordering
Parameters
atomic_t *vpointer to atomic_t
Description
If (v >= 0), atomically updates v to (v + 1) with full ordering. Otherwise, v is not modified and relaxed ordering is provided.
Unsafe to use in noinstr code; use raw_atomic_inc_unless_negative() there.
Return
true if v was updated, false otherwise.
-
bool atomic_dec_unless_positive(atomic_t *v)¶
atomic decrement unless positive with full ordering
Parameters
atomic_t *vpointer to atomic_t
Description
If (v <= 0), atomically updates v to (v - 1) with full ordering. Otherwise, v is not modified and relaxed ordering is provided.
Unsafe to use in noinstr code; use raw_atomic_dec_unless_positive() there.
Return
true if v was updated, false otherwise.
-
int atomic_dec_if_positive(atomic_t *v)¶
atomic decrement if positive with full ordering
Parameters
atomic_t *vpointer to atomic_t
Description
If (v > 0), atomically updates v to (v - 1) with full ordering. Otherwise, v is not modified and relaxed ordering is provided.
Unsafe to use in noinstr code; use raw_atomic_dec_if_positive() there.
Return
The old value of (v - 1), regardless of whether v was updated.
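A minimal sketch of a credit pool built on atomic_dec_if_positive(): a negative return value means no credit was available and nothing was consumed. The credit count and helper names are hypothetical:

#include <linux/atomic.h>
#include <linux/types.h>

/* Hypothetical pool of 8 transmit credits. */
static atomic_t tx_credits = ATOMIC_INIT(8);

/* Consume one credit if any are left. atomic_dec_if_positive() returns the
 * old value minus one, so a negative result means the pool was already
 * empty and the counter was left untouched.
 */
static bool my_take_credit(void)
{
	return atomic_dec_if_positive(&tx_credits) >= 0;
}

/* Return a credit when the corresponding transmit completes. */
static void my_give_credit(void)
{
	atomic_inc(&tx_credits);
}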
-
s64 atomic64_read(const atomic64_t *v)¶
atomic load with relaxed ordering
Parameters
const atomic64_t *vpointer to atomic64_t
Description
Atomically loads the value of v with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic64_read() there.
Return
The value loaded from v.
-
s64 atomic64_read_acquire(const atomic64_t *v)¶
atomic load with acquire ordering
Parameters
const atomic64_t *vpointer to atomic64_t
Description
Atomically loads the value of v with acquire ordering.
Unsafe to use in noinstr code; use raw_atomic64_read_acquire() there.
Return
The value loaded from v.
-
void atomic64_set(atomic64_t *v, s64 i)¶
atomic set with relaxed ordering
Parameters
atomic64_t *vpointer to atomic64_t
s64 is64 value to assign
Description
Atomically sets v to i with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic64_set() there.
Return
Nothing.
-
void atomic64_set_release(atomic64_t *v, s64 i)¶
atomic set with release ordering
Parameters
atomic64_t *vpointer to atomic64_t
s64 is64 value to assign
Description
Atomically sets v to i with release ordering.
Unsafe to use in noinstr code; use raw_atomic64_set_release() there.
Return
Nothing.
-
void atomic64_add(s64 i, atomic64_t *v)¶
atomic add with relaxed ordering
Parameters
s64 is64 value to add
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v + i) with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic64_add() there.
Return
Nothing.
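A minimal sketch of a 64-bit statistics counter, where atomic64_t avoids the wrap-around a 32-bit atomic_t would eventually hit; the counter and helper names are hypothetical:

#include <linux/atomic.h>
#include <linux/types.h>

/* Hypothetical per-device byte counter that would overflow a 32-bit
 * atomic_t long before the device is retired.
 */
static atomic64_t my_rx_bytes = ATOMIC64_INIT(0);

static void my_account_rx(size_t len)
{
	atomic64_add(len, &my_rx_bytes);	/* relaxed: statistics only */
}

static u64 my_rx_bytes_total(void)
{
	return atomic64_read(&my_rx_bytes);	/* relaxed snapshot */
}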
-
s64 atomic64_add_return(s64 i, atomic64_t *v)¶
atomic add with full ordering
Parameters
s64 is64 value to add
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v + i) with full ordering.
Unsafe to use in noinstr code; use raw_atomic64_add_return() there.
Return
The updated value of v.
-
s64 atomic64_add_return_acquire(s64 i, atomic64_t *v)¶
atomic add with acquire ordering
Parameters
s64 is64 value to add
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v + i) with acquire ordering.
Unsafe to use in noinstr code; use raw_atomic64_add_return_acquire() there.
Return
The updated value of v.
-
s64 atomic64_add_return_release(s64 i, atomic64_t *v)¶
atomic add with release ordering
Parameters
s64 is64 value to add
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v + i) with release ordering.
Unsafe to use in noinstr code; use raw_atomic64_add_return_release() there.
Return
The updated value of v.
-
s64 atomic64_add_return_relaxed(s64 i, atomic64_t *v)¶
atomic add with relaxed ordering
Parameters
s64 is64 value to add
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v + i) with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic64_add_return_relaxed() there.
Return
The updated value of v.
-
s64 atomic64_fetch_add(s64 i, atomic64_t *v)¶
atomic add with full ordering
Parameters
s64 is64 value to add
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v + i) with full ordering.
Unsafe to use in noinstr code; use raw_atomic64_fetch_add() there.
Return
The original value of v.
-
s64 atomic64_fetch_add_acquire(s64 i, atomic64_t *v)¶
atomic add with acquire ordering
Parameters
s64 is64 value to add
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v + i) with acquire ordering.
Unsafe to use in noinstr code; use raw_atomic64_fetch_add_acquire() there.
Return
The original value of v.
-
s64 atomic64_fetch_add_release(s64 i, atomic64_t *v)¶
atomic add with release ordering
Parameters
s64 is64 value to add
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v + i) with release ordering.
Unsafe to use in noinstr code; use raw_atomic64_fetch_add_release() there.
Return
The original value of v.
-
s64 atomic64_fetch_add_relaxed(s64 i, atomic64_t *v)¶
atomic add with relaxed ordering
Parameters
s64 is64 value to add
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v + i) with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic64_fetch_add_relaxed() there.
Return
The original value of v.
-
void atomic64_sub(s64 i, atomic64_t *v)¶
atomic subtract with relaxed ordering
Parameters
s64 is64 value to subtract
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v - i) with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic64_sub() there.
Return
Nothing.
-
s64 atomic64_sub_return(s64 i, atomic64_t *v)¶
atomic subtract with full ordering
Parameters
s64 is64 value to subtract
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v - i) with full ordering.
Unsafe to use in noinstr code; use raw_atomic64_sub_return() there.
Return
The updated value of v.
-
s64 atomic64_sub_return_acquire(s64 i, atomic64_t *v)¶
atomic subtract with acquire ordering
Parameters
s64 is64 value to subtract
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v - i) with acquire ordering.
Unsafe to use in noinstr code; use raw_atomic64_sub_return_acquire() there.
Return
The updated value of v.
-
s64 atomic64_sub_return_release(s64 i, atomic64_t *v)¶
atomic subtract with release ordering
Parameters
s64 is64 value to subtract
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v - i) with release ordering.
Unsafe to use in noinstr code; use raw_atomic64_sub_return_release() there.
Return
The updated value of v.
-
s64 atomic64_sub_return_relaxed(s64 i, atomic64_t *v)¶
atomic subtract with relaxed ordering
Parameters
s64 is64 value to subtract
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v - i) with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic64_sub_return_relaxed() there.
Return
The updated value of v.
-
s64 atomic64_fetch_sub(s64 i, atomic64_t *v)¶
atomic subtract with full ordering
Parameters
s64 is64 value to subtract
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v - i) with full ordering.
Unsafe to use in noinstr code; use raw_atomic64_fetch_sub() there.
Return
The original value of v.
-
s64 atomic64_fetch_sub_acquire(s64 i, atomic64_t *v)¶
atomic subtract with acquire ordering
Parameters
s64 is64 value to subtract
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v - i) with acquire ordering.
Unsafe to use in noinstr code; use raw_atomic64_fetch_sub_acquire() there.
Return
The original value of v.
-
s64 atomic64_fetch_sub_release(s64 i, atomic64_t *v)¶
atomic subtract with release ordering
Parameters
s64 is64 value to subtract
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v - i) with release ordering.
Unsafe to use in noinstr code; use raw_atomic64_fetch_sub_release() there.
Return
The original value of v.
-
s64 atomic64_fetch_sub_relaxed(s64 i, atomic64_t *v)¶
atomic subtract with relaxed ordering
Parameters
s64 is64 value to subtract
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v - i) with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic64_fetch_sub_relaxed() there.
Return
The original value of v.
-
void atomic64_inc(atomic64_t *v)¶
atomic increment with relaxed ordering
Parameters
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v + 1) with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic64_inc() there.
Return
Nothing.
-
s64 atomic64_inc_return(atomic64_t *v)¶
atomic increment with full ordering
Parameters
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v + 1) with full ordering.
Unsafe to use in noinstr code; use raw_atomic64_inc_return() there.
Return
The updated value of v.
-
s64 atomic64_inc_return_acquire(atomic64_t *v)¶
atomic increment with acquire ordering
Parameters
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v + 1) with acquire ordering.
Unsafe to use in noinstr code; use raw_atomic64_inc_return_acquire() there.
Return
The updated value of v.
-
s64 atomic64_inc_return_release(atomic64_t *v)¶
atomic increment with release ordering
Parameters
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v + 1) with release ordering.
Unsafe to use in noinstr code; use raw_atomic64_inc_return_release() there.
Return
The updated value of v.
-
s64 atomic64_inc_return_relaxed(atomic64_t *v)¶
atomic increment with relaxed ordering
Parameters
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v + 1) with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic64_inc_return_relaxed() there.
Return
The updated value of v.
-
s64 atomic64_fetch_inc(atomic64_t *v)¶
atomic increment with full ordering
Parameters
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v + 1) with full ordering.
Unsafe to use in noinstr code; use raw_atomic64_fetch_inc() there.
Return
The original value of v.
-
s64 atomic64_fetch_inc_acquire(atomic64_t *v)¶
atomic increment with acquire ordering
Parameters
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v + 1) with acquire ordering.
Unsafe to use in noinstr code; use raw_atomic64_fetch_inc_acquire() there.
Return
The original value of v.
-
s64 atomic64_fetch_inc_release(atomic64_t *v)¶
atomic increment with release ordering
Parameters
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v + 1) with release ordering.
Unsafe to use in noinstr code; use raw_atomic64_fetch_inc_release() there.
Return
The original value of v.
-
s64 atomic64_fetch_inc_relaxed(atomic64_t *v)¶
atomic increment with relaxed ordering
Parameters
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v + 1) with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic64_fetch_inc_relaxed() there.
Return
The original value of v.
-
void atomic64_dec(atomic64_t *v)¶
atomic decrement with relaxed ordering
Parameters
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v - 1) with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic64_dec() there.
Return
Nothing.
-
s64 atomic64_dec_return(atomic64_t *v)¶
atomic decrement with full ordering
Parameters
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v - 1) with full ordering.
Unsafe to use in noinstr code; use raw_atomic64_dec_return() there.
Return
The updated value of v.
-
s64 atomic64_dec_return_acquire(atomic64_t *v)¶
atomic decrement with acquire ordering
Parameters
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v - 1) with acquire ordering.
Unsafe to use in noinstr code; use raw_atomic64_dec_return_acquire() there.
Return
The updated value of v.
-
s64 atomic64_dec_return_release(atomic64_t *v)¶
atomic decrement with release ordering
Parameters
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v - 1) with release ordering.
Unsafe to use in noinstr code; use raw_atomic64_dec_return_release() there.
Return
The updated value of v.
-
s64 atomic64_dec_return_relaxed(atomic64_t *v)¶
atomic decrement with relaxed ordering
Parameters
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v - 1) with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic64_dec_return_relaxed() there.
Return
The updated value of v.
-
s64 atomic64_fetch_dec(atomic64_t *v)¶
atomic decrement with full ordering
Parameters
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v - 1) with full ordering.
Unsafe to use in noinstr code; use raw_atomic64_fetch_dec() there.
Return
The original value of v.
-
s64 atomic64_fetch_dec_acquire(atomic64_t *v)¶
atomic decrement with acquire ordering
Parameters
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v - 1) with acquire ordering.
Unsafe to use in noinstr code; use raw_atomic64_fetch_dec_acquire() there.
Return
The original value of v.
-
s64 atomic64_fetch_dec_release(atomic64_t *v)¶
atomic decrement with release ordering
Parameters
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v - 1) with release ordering.
Unsafe to use in noinstr code; use raw_atomic64_fetch_dec_release() there.
Return
The original value of v.
-
s64 atomic64_fetch_dec_relaxed(atomic64_t *v)¶
atomic decrement with relaxed ordering
Parameters
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v - 1) with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic64_fetch_dec_relaxed() there.
Return
The original value of v.
-
void atomic64_and(s64 i, atomic64_t *v)¶
atomic bitwise AND with relaxed ordering
Parameters
s64 is64 value
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v & i) with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic64_and() there.
Return
Nothing.
-
s64 atomic64_fetch_and(s64 i, atomic64_t *v)¶
atomic bitwise AND with full ordering
Parameters
s64 is64 value
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v & i) with full ordering.
Unsafe to use in noinstr code; use raw_atomic64_fetch_and() there.
Return
The original value of v.
-
s64 atomic64_fetch_and_acquire(s64 i, atomic64_t *v)¶
atomic bitwise AND with acquire ordering
Parameters
s64 is64 value
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v & i) with acquire ordering.
Unsafe to use in noinstr code; use raw_atomic64_fetch_and_acquire() there.
Return
The original value of v.
-
s64 atomic64_fetch_and_release(s64 i, atomic64_t *v)¶
atomic bitwise AND with release ordering
Parameters
s64 is64 value
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v & i) with release ordering.
Unsafe to use in noinstr code; use raw_atomic64_fetch_and_release() there.
Return
The original value of v.
-
s64 atomic64_fetch_and_relaxed(s64 i, atomic64_t *v)¶
atomic bitwise AND with relaxed ordering
Parameters
s64 is64 value
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v & i) with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic64_fetch_and_relaxed() there.
Return
The original value of v.
-
void atomic64_andnot(s64 i, atomic64_t *v)¶
atomic bitwise AND NOT with relaxed ordering
Parameters
s64 is64 value
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v & ~i) with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic64_andnot() there.
Return
Nothing.
-
s64 atomic64_fetch_andnot(s64 i, atomic64_t *v)¶
atomic bitwise AND NOT with full ordering
Parameters
s64 is64 value
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v & ~i) with full ordering.
Unsafe to use in noinstr code; use raw_atomic64_fetch_andnot() there.
Return
The original value of v.
-
s64 atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)¶
atomic bitwise AND NOT with acquire ordering
Parameters
s64 is64 value
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v & ~i) with acquire ordering.
Unsafe to use in noinstr code; use raw_atomic64_fetch_andnot_acquire() there.
Return
The original value of v.
-
s64 atomic64_fetch_andnot_release(s64 i, atomic64_t *v)¶
atomic bitwise AND NOT with release ordering
Parameters
s64 is64 value
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v & ~i) with release ordering.
Unsafe to use in noinstr code; use raw_atomic64_fetch_andnot_release() there.
Return
The original value of v.
-
s64 atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v)¶
atomic bitwise AND NOT with relaxed ordering
Parameters
s64 is64 value
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v & ~i) with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic64_fetch_andnot_relaxed() there.
Return
The original value of v.
-
void atomic64_or(s64 i, atomic64_t *v)¶
atomic bitwise OR with relaxed ordering
Parameters
s64 is64 value
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v | i) with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic64_or() there.
Return
Nothing.
-
s64 atomic64_fetch_or(s64 i, atomic64_t *v)¶
atomic bitwise OR with full ordering
Parameters
s64 is64 value
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v | i) with full ordering.
Unsafe to use in noinstr code; use raw_atomic64_fetch_or() there.
Return
The original value of v.
-
s64 atomic64_fetch_or_acquire(s64 i, atomic64_t *v)¶
atomic bitwise OR with acquire ordering
Parameters
s64 is64 value
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v | i) with acquire ordering.
Unsafe to use in noinstr code; use raw_atomic64_fetch_or_acquire() there.
Return
The original value of v.
-
s64 atomic64_fetch_or_release(s64 i, atomic64_t *v)¶
atomic bitwise OR with release ordering
Parameters
s64 is64 value
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v | i) with release ordering.
Unsafe to use in noinstr code; use raw_atomic64_fetch_or_release() there.
Return
The original value of v.
-
s64 atomic64_fetch_or_relaxed(s64 i, atomic64_t *v)¶
atomic bitwise OR with relaxed ordering
Parameters
s64 is64 value
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v | i) with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic64_fetch_or_relaxed() there.
Return
The original value of v.
-
void atomic64_xor(s64 i, atomic64_t *v)¶
atomic bitwise XOR with relaxed ordering
Parameters
s64 is64 value
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v ^ i) with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic64_xor() there.
Return
Nothing.
-
s64 atomic64_fetch_xor(s64 i, atomic64_t *v)¶
atomic bitwise XOR with full ordering
Parameters
s64 is64 value
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v ^ i) with full ordering.
Unsafe to use in noinstr code; use raw_atomic64_fetch_xor() there.
Return
The original value of v.
-
s64 atomic64_fetch_xor_acquire(s64 i, atomic64_t *v)¶
atomic bitwise XOR with acquire ordering
Parameters
s64 is64 value
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v ^ i) with acquire ordering.
Unsafe to use in noinstr code; use raw_atomic64_fetch_xor_acquire() there.
Return
The original value of v.
-
s64 atomic64_fetch_xor_release(s64 i, atomic64_t *v)¶
atomic bitwise XOR with release ordering
Parameters
s64 is64 value
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v ^ i) with release ordering.
Unsafe to use in noinstr code; use raw_atomic64_fetch_xor_release() there.
Return
The original value of v.
-
s64 atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v)¶
atomic bitwise XOR with relaxed ordering
Parameters
s64 is64 value
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v ^ i) with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic64_fetch_xor_relaxed() there.
Return
The original value of v.
-
s64 atomic64_xchg(atomic64_t *v, s64 new)¶
atomic exchange with full ordering
Parameters
atomic64_t *vpointer to atomic64_t
s64 news64 value to assign
Description
Atomically updates v to new with full ordering.
Unsafe to use in noinstr code; use raw_atomic64_xchg() there.
Return
The original value of v.
-
s64 atomic64_xchg_acquire(atomic64_t *v, s64 new)¶
atomic exchange with acquire ordering
Parameters
atomic64_t *vpointer to atomic64_t
s64 news64 value to assign
Description
Atomically updates v to new with acquire ordering.
Unsafe to use in noinstr code; use raw_atomic64_xchg_acquire() there.
Return
The original value of v.
-
s64 atomic64_xchg_release(atomic64_t *v, s64 new)¶
atomic exchange with release ordering
Parameters
atomic64_t *vpointer to atomic64_t
s64 news64 value to assign
Description
Atomically updates v to new with release ordering.
Unsafe to use in noinstr code; use raw_atomic64_xchg_release() there.
Return
The original value of v.
-
s64 atomic64_xchg_relaxed(atomic64_t *v, s64 new)¶
atomic exchange with relaxed ordering
Parameters
atomic64_t *vpointer to atomic64_t
s64 news64 value to assign
Description
Atomically updates v to new with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic64_xchg_relaxed() there.
Return
The original value of v.
-
s64 atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)¶
atomic compare and exchange with full ordering
Parameters
atomic64_t *vpointer to atomic64_t
s64 olds64 value to compare with
s64 news64 value to assign
Description
If (v == old), atomically updates v to new with full ordering. Otherwise, v is not modified and relaxed ordering is provided.
Unsafe to use in noinstr code; use raw_atomic64_cmpxchg() there.
Return
The original value of v.
-
s64 atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new)¶
atomic compare and exchange with acquire ordering
Parameters
atomic64_t *vpointer to atomic64_t
s64 olds64 value to compare with
s64 news64 value to assign
Description
If (v == old), atomically updates v to new with acquire ordering. Otherwise, v is not modified and relaxed ordering is provided.
Unsafe to use in noinstr code; use raw_atomic64_cmpxchg_acquire() there.
Return
The original value of v.
-
s64 atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new)¶
atomic compare and exchange with release ordering
Parameters
atomic64_t *vpointer to atomic64_t
s64 olds64 value to compare with
s64 news64 value to assign
Description
If (v == old), atomically updates v to new with release ordering. Otherwise, v is not modified and relaxed ordering is provided.
Unsafe to use in noinstr code; use raw_atomic64_cmpxchg_release() there.
Return
The original value of v.
-
s64 atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new)¶
atomic compare and exchange with relaxed ordering
Parameters
atomic64_t *vpointer to atomic64_t
s64 olds64 value to compare with
s64 news64 value to assign
Description
If (v == old), atomically updates v to new with relaxed ordering. Otherwise, v is not modified and relaxed ordering is provided.
Unsafe to use in noinstr code; use raw_atomic64_cmpxchg_relaxed() there.
Return
The original value of v.
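As an illustration, a minimal sketch of the classic compare-and-exchange retry loop, here tracking a running maximum in a hypothetical max_seen variable:

#include <linux/atomic.h>

static atomic64_t max_seen = ATOMIC64_INIT(0);

/* Record a sample, keeping the largest value observed so far.
 * atomic64_cmpxchg() returns the value it actually found, so a
 * mismatch means another CPU raced in; the loop retries with that
 * fresher value until the update lands or becomes unnecessary.
 */
static void record_max(s64 sample)
{
        s64 old = atomic64_read(&max_seen);

        while (old < sample) {
                s64 prev = atomic64_cmpxchg(&max_seen, old, sample);

                if (prev == old)
                        break;          /* our value was installed */
                old = prev;             /* lost the race, retry */
        }
}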
-
bool atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)¶
atomic compare and exchange with full ordering
Parameters
atomic64_t *vpointer to atomic64_t
s64 *oldpointer to s64 value to compare with
s64 news64 value to assign
Description
If (v == old), atomically updates v to new with full ordering. Otherwise, v is not modified, old is updated to the current value of v, and relaxed ordering is provided.
Unsafe to use in noinstr code; use raw_atomic64_try_cmpxchg() there.
Return
true if the exchange occurred, false otherwise.
-
bool atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)¶
atomic compare and exchange with acquire ordering
Parameters
atomic64_t *vpointer to atomic64_t
s64 *oldpointer to s64 value to compare with
s64 news64 value to assign
Description
If (v == old), atomically updates v to new with acquire ordering. Otherwise, v is not modified, old is updated to the current value of v, and relaxed ordering is provided.
Unsafe to use in noinstr code; use raw_atomic64_try_cmpxchg_acquire() there.
Return
true if the exchange occurred, false otherwise.
-
bool atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)¶
atomic compare and exchange with release ordering
Parameters
atomic64_t *vpointer to atomic64_t
s64 *oldpointer to s64 value to compare with
s64 news64 value to assign
Description
If (v == old), atomically updates v to new with release ordering. Otherwise, v is not modified, old is updated to the current value of v, and relaxed ordering is provided.
Unsafe to use in noinstr code; use raw_atomic64_try_cmpxchg_release() there.
Return
true if the exchange occurred, false otherwise.
-
bool atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new)¶
atomic compare and exchange with relaxed ordering
Parameters
atomic64_t *vpointer to atomic64_t
s64 *oldpointer to s64 value to compare with
s64 news64 value to assign
Description
If (v == old), atomically updates v to new with relaxed ordering. Otherwise, v is not modified, old is updated to the current value of v, and relaxed ordering is provided.
Unsafe to use in noinstr code; use raw_atomic64_try_cmpxchg_relaxed() there.
Return
true if the exchange occurred, false otherwise.
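The try_cmpxchg() variants make the same loop shorter because a failed attempt rewrites *old with the current value of v. The previous sketch rewritten with the same hypothetical max_seen:

#include <linux/atomic.h>

static atomic64_t max_seen = ATOMIC64_INIT(0);

static void record_max(s64 sample)
{
        s64 old = atomic64_read(&max_seen);

        do {
                if (old >= sample)
                        return;         /* already at least as large */
        } while (!atomic64_try_cmpxchg(&max_seen, &old, sample));
}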
-
bool atomic64_sub_and_test(s64 i, atomic64_t *v)¶
atomic subtract and test if zero with full ordering
Parameters
s64 is64 value to subtract
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v - i) with full ordering.
Unsafe to use in noinstr code; use raw_atomic64_sub_and_test() there.
Return
true if the resulting value of v is zero, false otherwise.
-
bool atomic64_dec_and_test(atomic64_t *v)¶
atomic decrement and test if zero with full ordering
Parameters
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v - 1) with full ordering.
Unsafe to use in noinstr code; use raw_atomic64_dec_and_test() there.
Return
true if the resulting value of v is zero, false otherwise.
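A typical use of dec_and_test() is releasing the last reference to an object. A minimal sketch with a hypothetical struct foo (real code would normally use refcount_t rather than a raw atomic):

#include <linux/atomic.h>
#include <linux/slab.h>

struct foo {
        atomic64_t refs;
        /* ... payload ... */
};

static void foo_put(struct foo *f)
{
        /* Full ordering ensures all prior accesses to *f on this CPU
         * complete before whichever CPU sees the count reach zero
         * frees the object.
         */
        if (atomic64_dec_and_test(&f->refs))
                kfree(f);
}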
-
bool atomic64_inc_and_test(atomic64_t *v)¶
atomic increment and test if zero with full ordering
Parameters
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v + 1) with full ordering.
Unsafe to use in noinstr code; use raw_atomic64_inc_and_test() there.
Return
true if the resulting value of v is zero, false otherwise.
-
bool atomic64_add_negative(s64 i, atomic64_t *v)¶
atomic add and test if negative with full ordering
Parameters
s64 is64 value to add
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v + i) with full ordering.
Unsafe to use in noinstr code; use raw_atomic64_add_negative() there.
Return
true if the resulting value of v is negative, false otherwise.
-
bool atomic64_add_negative_acquire(s64 i, atomic64_t *v)¶
atomic add and test if negative with acquire ordering
Parameters
s64 is64 value to add
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v + i) with acquire ordering.
Unsafe to use in noinstr code; use raw_atomic64_add_negative_acquire() there.
Return
true if the resulting value of v is negative, false otherwise.
-
bool atomic64_add_negative_release(s64 i, atomic64_t *v)¶
atomic add and test if negative with release ordering
Parameters
s64 is64 value to add
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v + i) with release ordering.
Unsafe to use in noinstr code; use raw_atomic64_add_negative_release() there.
Return
true if the resulting value of v is negative, false otherwise.
-
bool atomic64_add_negative_relaxed(s64 i, atomic64_t *v)¶
atomic add and test if negative with relaxed ordering
Parameters
s64 is64 value to add
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v + i) with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic64_add_negative_relaxed() there.
Return
true if the resulting value of v is negative, false otherwise.
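add_negative() folds the update and the sign check into one atomic step, avoiding a racy separate read. A minimal sketch with a hypothetical signed balance counter:

#include <linux/atomic.h>
#include <linux/printk.h>

static atomic64_t balance = ATOMIC64_INIT(0);

/* Apply a signed delta and warn the first time the balance drops
 * below zero.
 */
static void account(s64 delta)
{
        if (atomic64_add_negative(delta, &balance))
                pr_warn_once("balance went negative\n");
}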
-
s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)¶
atomic add unless value with full ordering
Parameters
atomic64_t *vpointer to atomic64_t
s64 as64 value to add
s64 us64 value to compare with
Description
If (v != u), atomically updates v to (v + a) with full ordering. Otherwise, v is not modified and relaxed ordering is provided.
Unsafe to use in noinstr code; use raw_atomic64_fetch_add_unless() there.
Return
The original value of v.
-
bool atomic64_add_unless(atomic64_t *v, s64 a, s64 u)¶
atomic add unless value with full ordering
Parameters
atomic64_t *vpointer to atomic64_t
s64 as64 value to add
s64 us64 value to compare with
Description
If (v != u), atomically updates v to (v + a) with full ordering. Otherwise, v is not modified and relaxed ordering is provided.
Unsafe to use in noinstr code; use raw_atomic64_add_unless() there.
Return
true if v was updated, false otherwise.
-
bool atomic64_inc_not_zero(atomic64_t *v)¶
atomic increment unless zero with full ordering
Parameters
atomic64_t *vpointer to atomic64_t
Description
If (v != 0), atomically updates v to (v + 1) with full ordering. Otherwise, v is not modified and relaxed ordering is provided.
Unsafe to use in noinstr code; use raw_atomic64_inc_not_zero() there.
Return
true if v was updated, false otherwise.
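inc_not_zero() is the usual building block for taking a reference only while an object is still live. A minimal, self-contained sketch with a hypothetical struct foo, assuming something else (e.g. RCU) keeps the memory valid while the count is examined:

#include <linux/atomic.h>

struct foo {
        atomic64_t refs;
        /* ... payload ... */
};

static bool foo_get_unless_zero(struct foo *f)
{
        /* Fails once the last reference has already been dropped. */
        return atomic64_inc_not_zero(&f->refs);
}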
-
bool atomic64_inc_unless_negative(atomic64_t *v)¶
atomic increment unless negative with full ordering
Parameters
atomic64_t *vpointer to atomic64_t
Description
If (v >= 0), atomically updates v to (v + 1) with full ordering. Otherwise, v is not modified and relaxed ordering is provided.
Unsafe to use in noinstr code; use raw_atomic64_inc_unless_negative() there.
Return
true if v was updated, false otherwise.
-
bool atomic64_dec_unless_positive(atomic64_t *v)¶
atomic decrement unless positive with full ordering
Parameters
atomic64_t *vpointer to atomic64_t
Description
If (v <= 0), atomically updates v to (v - 1) with full ordering. Otherwise, v is not modified and relaxed ordering is provided.
Unsafe to use in noinstr code; use raw_atomic64_dec_unless_positive() there.
Return
true if v was updated, false otherwise.
-
s64 atomic64_dec_if_positive(atomic64_t *v)¶
atomic decrement if positive with full ordering
Parameters
atomic64_t *vpointer to atomic64_t
Description
If (v > 0), atomically updates v to (v - 1) with full ordering. Otherwise, v is not modified and relaxed ordering is provided.
Unsafe to use in noinstr code; use raw_atomic64_dec_if_positive() there.
Return
The old value of (v - 1), regardless of whether v was updated.
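dec_if_positive() works well as a credit or token counter: the return value is negative exactly when no credit was available. A minimal sketch with hypothetical transmit credits:

#include <linux/atomic.h>

static atomic64_t tx_credits = ATOMIC64_INIT(16);

/* Consume one credit if any remain; the counter is left untouched
 * when it is already zero or negative.
 */
static bool take_credit(void)
{
        return atomic64_dec_if_positive(&tx_credits) >= 0;
}

static void give_credit(void)
{
        atomic64_inc(&tx_credits);
}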
-
long atomic_long_read(const atomic_long_t *v)¶
atomic load with relaxed ordering
Parameters
const atomic_long_t *vpointer to atomic_long_t
Description
Atomically loads the value of v with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic_long_read() there.
Return
The value loaded from v.
-
long atomic_long_read_acquire(const atomic_long_t *v)¶
atomic load with acquire ordering
Parameters
const atomic_long_t *vpointer to atomic_long_t
Description
Atomically loads the value of v with acquire ordering.
Unsafe to use in noinstr code; use raw_atomic_long_read_acquire() there.
Return
The value loaded from v.
-
void atomic_long_set(atomic_long_t *v, long i)¶
atomic set with relaxed ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
long ilong value to assign
Description
Atomically sets v to i with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic_long_set() there.
Return
Nothing.
-
void atomic_long_set_release(atomic_long_t *v, long i)¶
atomic set with release ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
long ilong value to assign
Description
Atomically sets v to i with release ordering.
Unsafe to use in noinstr code; use raw_atomic_long_set_release() there.
Return
Nothing.
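The pairing of atomic_long_set_release() with atomic_long_read_acquire() is the usual way to publish initialised state. A minimal sketch with a hypothetical engine_state word:

#include <linux/atomic.h>

static atomic_long_t engine_state = ATOMIC_LONG_INIT(0);
#define ENGINE_READY    1L

static void engine_finish_init(void)
{
        /* ... set up everything readers will consume ... */

        /* Release ordering publishes the setup above before the flag. */
        atomic_long_set_release(&engine_state, ENGINE_READY);
}

static bool engine_is_ready(void)
{
        /* Acquire ordering pairs with the release above. */
        return atomic_long_read_acquire(&engine_state) == ENGINE_READY;
}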
-
void atomic_long_add(long i, atomic_long_t *v)¶
atomic add with relaxed ordering
Parameters
long ilong value to add
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v + i) with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic_long_add() there.
Return
Nothing.
-
long atomic_long_add_return(long i, atomic_long_t *v)¶
atomic add with full ordering
Parameters
long ilong value to add
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v + i) with full ordering.
Unsafe to use in noinstr code; use raw_atomic_long_add_return() there.
Return
The updated value of v.
-
long atomic_long_add_return_acquire(long i, atomic_long_t *v)¶
atomic add with acquire ordering
Parameters
long ilong value to add
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v + i) with acquire ordering.
Unsafe to use in noinstr code; use raw_atomic_long_add_return_acquire() there.
Return
The updated value of v.
-
long atomic_long_add_return_release(long i, atomic_long_t *v)¶
atomic add with release ordering
Parameters
long ilong value to add
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v + i) with release ordering.
Unsafe to use in noinstr code; use raw_atomic_long_add_return_release() there.
Return
The updated value of v.
-
long atomic_long_add_return_relaxed(long i, atomic_long_t *v)¶
atomic add with relaxed ordering
Parameters
long ilong value to add
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v + i) with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic_long_add_return_relaxed() there.
Return
The updated value of v.
-
long atomic_long_fetch_add(long i, atomic_long_t *v)¶
atomic add with full ordering
Parameters
long ilong value to add
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v + i) with full ordering.
Unsafe to use in noinstr code; use raw_atomic_long_fetch_add() there.
Return
The original value of v.
-
long atomic_long_fetch_add_acquire(long i, atomic_long_t *v)¶
atomic add with acquire ordering
Parameters
long ilong value to add
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v + i) with acquire ordering.
Unsafe to use in noinstr code; use raw_atomic_long_fetch_add_acquire() there.
Return
The original value of v.
-
long atomic_long_fetch_add_release(long i, atomic_long_t *v)¶
atomic add with release ordering
Parameters
long ilong value to add
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v + i) with release ordering.
Unsafe to use in noinstr code; use raw_atomic_long_fetch_add_release() there.
Return
The original value of v.
-
long atomic_long_fetch_add_relaxed(long i, atomic_long_t *v)¶
atomic add with relaxed ordering
Parameters
long ilong value to add
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v + i) with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic_long_fetch_add_relaxed() there.
Return
The original value of v.
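Because the fetch_add() variants return the pre-addition value, they are a convenient way to hand out unique identifiers. A minimal sketch with a hypothetical next_id counter:

#include <linux/atomic.h>

static atomic_long_t next_id = ATOMIC_LONG_INIT(0);

/* Returns 0, 1, 2, ... across all callers. Relaxed ordering would
 * also suffice here, since the returned value carries all the meaning.
 */
static long alloc_id(void)
{
        return atomic_long_fetch_add(1, &next_id);
}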
-
void atomic_long_sub(long i, atomic_long_t *v)¶
atomic subtract with relaxed ordering
Parameters
long ilong value to subtract
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v - i) with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic_long_sub() there.
Return
Nothing.
-
long atomic_long_sub_return(long i, atomic_long_t *v)¶
atomic subtract with full ordering
Parameters
long ilong value to subtract
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v - i) with full ordering.
Unsafe to use in noinstr code; use raw_atomic_long_sub_return() there.
Return
The updated value of v.
-
long atomic_long_sub_return_acquire(long i, atomic_long_t *v)¶
atomic subtract with acquire ordering
Parameters
long ilong value to subtract
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v - i) with acquire ordering.
Unsafe to use in noinstr code; use raw_atomic_long_sub_return_acquire() there.
Return
The updated value of v.
-
long atomic_long_sub_return_release(long i, atomic_long_t *v)¶
atomic subtract with release ordering
Parameters
long ilong value to subtract
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v - i) with release ordering.
Unsafe to use in noinstr code; use raw_atomic_long_sub_return_release() there.
Return
The updated value of v.
-
long atomic_long_sub_return_relaxed(long i, atomic_long_t *v)¶
atomic subtract with relaxed ordering
Parameters
long ilong value to subtract
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v - i) with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic_long_sub_return_relaxed() there.
Return
The updated value of v.
-
long atomic_long_fetch_sub(long i, atomic_long_t *v)¶
atomic subtract with full ordering
Parameters
long ilong value to subtract
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v - i) with full ordering.
Unsafe to use in noinstr code; use raw_atomic_long_fetch_sub() there.
Return
The original value of v.
-
long atomic_long_fetch_sub_acquire(long i, atomic_long_t *v)¶
atomic subtract with acquire ordering
Parameters
long ilong value to subtract
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v - i) with acquire ordering.
Unsafe to use in noinstr code; use raw_atomic_long_fetch_sub_acquire() there.
Return
The original value of v.
-
long atomic_long_fetch_sub_release(long i, atomic_long_t *v)¶
atomic subtract with release ordering
Parameters
long ilong value to subtract
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v - i) with release ordering.
Unsafe to use in noinstr code; use raw_atomic_long_fetch_sub_release() there.
Return
The original value of v.
-
long atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v)¶
atomic subtract with relaxed ordering
Parameters
long ilong value to subtract
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v - i) with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic_long_fetch_sub_relaxed() there.
Return
The original value of v.
-
void atomic_long_inc(atomic_long_t *v)¶
atomic increment with relaxed ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v + 1) with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic_long_inc() there.
Return
Nothing.
-
long atomic_long_inc_return(atomic_long_t *v)¶
atomic increment with full ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v + 1) with full ordering.
Unsafe to use in noinstr code; use raw_atomic_long_inc_return() there.
Return
The updated value of v.
-
long atomic_long_inc_return_acquire(atomic_long_t *v)¶
atomic increment with acquire ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v + 1) with acquire ordering.
Unsafe to use in noinstr code; use raw_atomic_long_inc_return_acquire() there.
Return
The updated value of v.
-
long atomic_long_inc_return_release(atomic_long_t *v)¶
atomic increment with release ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v + 1) with release ordering.
Unsafe to use in noinstr code; use raw_atomic_long_inc_return_release() there.
Return
The updated value of v.
-
long atomic_long_inc_return_relaxed(atomic_long_t *v)¶
atomic increment with relaxed ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v + 1) with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic_long_inc_return_relaxed() there.
Return
The updated value of v.
-
long atomic_long_fetch_inc(atomic_long_t *v)¶
atomic increment with full ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v + 1) with full ordering.
Unsafe to use in noinstr code; use raw_atomic_long_fetch_inc() there.
Return
The original value of v.
-
long atomic_long_fetch_inc_acquire(atomic_long_t *v)¶
atomic increment with acquire ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v + 1) with acquire ordering.
Unsafe to use in noinstr code; use raw_atomic_long_fetch_inc_acquire() there.
Return
The original value of v.
-
long atomic_long_fetch_inc_release(atomic_long_t *v)¶
atomic increment with release ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v + 1) with release ordering.
Unsafe to use in noinstr code; use raw_atomic_long_fetch_inc_release() there.
Return
The original value of v.
-
long atomic_long_fetch_inc_relaxed(atomic_long_t *v)¶
atomic increment with relaxed ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v + 1) with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic_long_fetch_inc_relaxed() there.
Return
The original value of v.
-
void atomic_long_dec(atomic_long_t *v)¶
atomic decrement with relaxed ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v - 1) with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic_long_dec() there.
Return
Nothing.
-
long atomic_long_dec_return(atomic_long_t *v)¶
atomic decrement with full ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v - 1) with full ordering.
Unsafe to use in noinstr code; use raw_atomic_long_dec_return() there.
Return
The updated value of v.
-
long atomic_long_dec_return_acquire(atomic_long_t *v)¶
atomic decrement with acquire ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v - 1) with acquire ordering.
Unsafe to use in noinstr code; use raw_atomic_long_dec_return_acquire() there.
Return
The updated value of v.
-
long atomic_long_dec_return_release(atomic_long_t *v)¶
atomic decrement with release ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v - 1) with release ordering.
Unsafe to use in noinstr code; use raw_atomic_long_dec_return_release() there.
Return
The updated value of v.
-
long atomic_long_dec_return_relaxed(atomic_long_t *v)¶
atomic decrement with relaxed ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v - 1) with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic_long_dec_return_relaxed() there.
Return
The updated value of v.
-
long atomic_long_fetch_dec(atomic_long_t *v)¶
atomic decrement with full ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v - 1) with full ordering.
Unsafe to use in noinstr code; use raw_atomic_long_fetch_dec() there.
Return
The original value of v.
-
long atomic_long_fetch_dec_acquire(atomic_long_t *v)¶
atomic decrement with acquire ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v - 1) with acquire ordering.
Unsafe to use in noinstr code; use raw_atomic_long_fetch_dec_acquire() there.
Return
The original value of v.
-
long atomic_long_fetch_dec_release(atomic_long_t *v)¶
atomic decrement with release ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v - 1) with release ordering.
Unsafe to use in noinstr code; use raw_atomic_long_fetch_dec_release() there.
Return
The original value of v.
-
long atomic_long_fetch_dec_relaxed(atomic_long_t *v)¶
atomic decrement with relaxed ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v - 1) with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic_long_fetch_dec_relaxed() there.
Return
The original value of v.
-
void atomic_long_and(long i, atomic_long_t *v)¶
atomic bitwise AND with relaxed ordering
Parameters
long ilong value
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v & i) with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic_long_and() there.
Return
Nothing.
-
long atomic_long_fetch_and(long i, atomic_long_t *v)¶
atomic bitwise AND with full ordering
Parameters
long ilong value
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v & i) with full ordering.
Unsafe to use in noinstr code; use raw_atomic_long_fetch_and() there.
Return
The original value of v.
-
long atomic_long_fetch_and_acquire(long i, atomic_long_t *v)¶
atomic bitwise AND with acquire ordering
Parameters
long ilong value
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v & i) with acquire ordering.
Unsafe to use in noinstr code; use raw_atomic_long_fetch_and_acquire() there.
Return
The original value of v.
-
long atomic_long_fetch_and_release(long i, atomic_long_t *v)¶
atomic bitwise AND with release ordering
Parameters
long ilong value
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v & i) with release ordering.
Unsafe to use in noinstr code; use raw_atomic_long_fetch_and_release() there.
Return
The original value of v.
-
long atomic_long_fetch_and_relaxed(long i, atomic_long_t *v)¶
atomic bitwise AND with relaxed ordering
Parameters
long ilong value
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v & i) with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic_long_fetch_and_relaxed() there.
Return
The original value of v.
-
void atomic_long_andnot(long i, atomic_long_t *v)¶
atomic bitwise AND NOT with relaxed ordering
Parameters
long ilong value
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v & ~i) with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic_long_andnot() there.
Return
Nothing.
-
long atomic_long_fetch_andnot(long i, atomic_long_t *v)¶
atomic bitwise AND NOT with full ordering
Parameters
long ilong value
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v & ~i) with full ordering.
Unsafe to use in noinstr code; use raw_atomic_long_fetch_andnot() there.
Return
The original value of v.
-
long atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v)¶
atomic bitwise AND NOT with acquire ordering
Parameters
long ilong value
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v & ~i) with acquire ordering.
Unsafe to use in noinstr code; use raw_atomic_long_fetch_andnot_acquire() there.
Return
The original value of v.
-
long atomic_long_fetch_andnot_release(long i, atomic_long_t *v)¶
atomic bitwise AND NOT with release ordering
Parameters
long ilong value
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v & ~i) with release ordering.
Unsafe to use in noinstr code; use raw_atomic_long_fetch_andnot_release() there.
Return
The original value of v.
-
long atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v)¶
atomic bitwise AND NOT with relaxed ordering
Parameters
long ilong value
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v & ~i) with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic_long_fetch_andnot_relaxed() there.
Return
The original value of v.
-
void atomic_long_or(long i, atomic_long_t *v)¶
atomic bitwise OR with relaxed ordering
Parameters
long ilong value
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v | i) with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic_long_or() there.
Return
Nothing.
-
long atomic_long_fetch_or(long i, atomic_long_t *v)¶
atomic bitwise OR with full ordering
Parameters
long ilong value
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v | i) with full ordering.
Unsafe to use in noinstr code; use raw_atomic_long_fetch_or() there.
Return
The original value of v.
-
long atomic_long_fetch_or_acquire(long i, atomic_long_t *v)¶
atomic bitwise OR with acquire ordering
Parameters
long ilong value
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v | i) with acquire ordering.
Unsafe to use in noinstr code; use raw_atomic_long_fetch_or_acquire() there.
Return
The original value of v.
-
long atomic_long_fetch_or_release(long i, atomic_long_t *v)¶
atomic bitwise OR with release ordering
Parameters
long ilong value
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v | i) with release ordering.
Unsafe to use in noinstr code; use raw_atomic_long_fetch_or_release() there.
Return
The original value of v.
-
long atomic_long_fetch_or_relaxed(long i, atomic_long_t *v)¶
atomic bitwise OR with relaxed ordering
Parameters
long ilong value
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v | i) with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic_long_fetch_or_relaxed() there.
Return
The original value of v.
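fetch_or() returns the flag word as it was before the OR, which lets exactly one caller win the right to perform a one-time action. A minimal sketch with a hypothetical control-flags word:

#include <linux/atomic.h>

static atomic_long_t ctrl_flags = ATOMIC_LONG_INIT(0);
#define CTRL_SHUTDOWN   (1L << 0)

/* Returns true only for the caller that actually set the bit. */
static bool request_shutdown(void)
{
        long old = atomic_long_fetch_or(CTRL_SHUTDOWN, &ctrl_flags);

        return !(old & CTRL_SHUTDOWN);
}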
-
void atomic_long_xor(long i, atomic_long_t *v)¶
atomic bitwise XOR with relaxed ordering
Parameters
long ilong value
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v ^ i) with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic_long_xor() there.
Return
Nothing.
-
long atomic_long_fetch_xor(long i, atomic_long_t *v)¶
atomic bitwise XOR with full ordering
Parameters
long ilong value
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v ^ i) with full ordering.
Unsafe to use in noinstr code; use raw_atomic_long_fetch_xor() there.
Return
The original value of v.
-
long atomic_long_fetch_xor_acquire(long i, atomic_long_t *v)¶
atomic bitwise XOR with acquire ordering
Parameters
long ilong value
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v ^ i) with acquire ordering.
Unsafe to use in noinstr code; use raw_atomic_long_fetch_xor_acquire() there.
Return
The original value of v.
-
long atomic_long_fetch_xor_release(long i, atomic_long_t *v)¶
atomic bitwise XOR with release ordering
Parameters
long ilong value
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v ^ i) with release ordering.
Unsafe to use in noinstr code; use raw_atomic_long_fetch_xor_release() there.
Return
The original value of v.
-
long atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v)¶
atomic bitwise XOR with relaxed ordering
Parameters
long ilong value
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v ^ i) with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic_long_fetch_xor_relaxed() there.
Return
The original value of v.
-
long atomic_long_xchg(atomic_long_t *v, long new)¶
atomic exchange with full ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
long newlong value to assign
Description
Atomically updates v to new with full ordering.
Unsafe to use in noinstr code; use raw_atomic_long_xchg() there.
Return
The original value of v.
-
long atomic_long_xchg_acquire(atomic_long_t *v, long new)¶
atomic exchange with acquire ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
long newlong value to assign
Description
Atomically updates v to new with acquire ordering.
Unsafe to use in noinstr code; use raw_atomic_long_xchg_acquire() there.
Return
The original value of v.
-
long atomic_long_xchg_release(atomic_long_t *v, long new)¶
atomic exchange with release ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
long newlong value to assign
Description
Atomically updates v to new with release ordering.
Unsafe to use in noinstr code; use raw_atomic_long_xchg_release() there.
Return
The original value of v.
-
long atomic_long_xchg_relaxed(atomic_long_t *v, long new)¶
atomic exchange with relaxed ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
long newlong value to assign
Description
Atomically updates v to new with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic_long_xchg_relaxed() there.
Return
The original value of v.
-
long atomic_long_cmpxchg(atomic_long_t *v, long old, long new)¶
atomic compare and exchange with full ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
long oldlong value to compare with
long newlong value to assign
Description
If (v == old), atomically updates v to new with full ordering. Otherwise, v is not modified and relaxed ordering is provided.
Unsafe to use in noinstr code; use raw_atomic_long_cmpxchg() there.
Return
The original value of v.
-
long atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new)¶
atomic compare and exchange with acquire ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
long oldlong value to compare with
long newlong value to assign
Description
If (v == old), atomically updates v to new with acquire ordering. Otherwise, v is not modified and relaxed ordering is provided.
Unsafe to use in noinstr code; use raw_atomic_long_cmpxchg_acquire() there.
Return
The original value of v.
-
long atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new)¶
atomic compare and exchange with release ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
long oldlong value to compare with
long newlong value to assign
Description
If (v == old), atomically updates v to new with release ordering. Otherwise, v is not modified and relaxed ordering is provided.
Unsafe to use in noinstr code; use raw_atomic_long_cmpxchg_release() there.
Return
The original value of v.
-
long atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new)¶
atomic compare and exchange with relaxed ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
long oldlong value to compare with
long newlong value to assign
Description
If (v == old), atomically updates v to new with relaxed ordering. Otherwise, v is not modified and relaxed ordering is provided.
Unsafe to use in noinstr code; use raw_atomic_long_cmpxchg_relaxed() there.
Return
The original value of v.
-
bool atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)¶
atomic compare and exchange with full ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
long *oldpointer to long value to compare with
long newlong value to assign
Description
If (v == old), atomically updates v to new with full ordering. Otherwise, v is not modified, old is updated to the current value of v, and relaxed ordering is provided.
Unsafe to use in noinstr code; use raw_atomic_long_try_cmpxchg() there.
Return
true if the exchange occurred, false otherwise.
-
bool atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new)¶
atomic compare and exchange with acquire ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
long *oldpointer to long value to compare with
long newlong value to assign
Description
If (v == old), atomically updates v to new with acquire ordering. Otherwise, v is not modified, old is updated to the current value of v, and relaxed ordering is provided.
Unsafe to use in noinstr code; use raw_atomic_long_try_cmpxchg_acquire() there.
Return
true if the exchange occurred, false otherwise.
-
bool atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new)¶
atomic compare and exchange with release ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
long *oldpointer to long value to compare with
long newlong value to assign
Description
If (v == old), atomically updates v to new with release ordering. Otherwise, v is not modified, old is updated to the current value of v, and relaxed ordering is provided.
Unsafe to use in noinstr code; use raw_atomic_long_try_cmpxchg_release() there.
Return
true if the exchange occurred, false otherwise.
-
bool atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new)¶
atomic compare and exchange with relaxed ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
long *oldpointer to long value to compare with
long newlong value to assign
Description
If (v == old), atomically updates v to new with relaxed ordering. Otherwise, v is not modified, old is updated to the current value of v, and relaxed ordering is provided.
Unsafe to use in noinstr code; use raw_atomic_long_try_cmpxchg_relaxed() there.
Return
true if the exchange occurred, false otherwise.
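A common use of the try_cmpxchg() loop is a bounded increment, where the limit must be rechecked against an up-to-date value on every attempt. A minimal sketch with a hypothetical user-slot counter:

#include <linux/atomic.h>

static atomic_long_t users = ATOMIC_LONG_INIT(0);
#define MAX_USERS       1024L

static bool get_user_slot(void)
{
        long old = atomic_long_read(&users);

        do {
                if (old >= MAX_USERS)
                        return false;   /* no slot available */
        } while (!atomic_long_try_cmpxchg(&users, &old, old + 1));

        return true;
}

static void put_user_slot(void)
{
        atomic_long_dec(&users);
}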
-
bool atomic_long_sub_and_test(long i, atomic_long_t *v)¶
atomic subtract and test if zero with full ordering
Parameters
long ilong value to subtract
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v - i) with full ordering.
Unsafe to use in noinstr code; use raw_atomic_long_sub_and_test() there.
Return
true if the resulting value of v is zero, false otherwise.
-
bool atomic_long_dec_and_test(atomic_long_t *v)¶
atomic decrement and test if zero with full ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v - 1) with full ordering.
Unsafe to use in noinstr code; use raw_atomic_long_dec_and_test() there.
Return
true if the resulting value of v is zero, false otherwise.
-
bool atomic_long_inc_and_test(atomic_long_t *v)¶
atomic increment and test if zero with full ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v + 1) with full ordering.
Unsafe to use in noinstr code; use raw_atomic_long_inc_and_test() there.
Return
true if the resulting value of v is zero, false otherwise.
-
bool atomic_long_add_negative(long i, atomic_long_t *v)¶
atomic add and test if negative with full ordering
Parameters
long ilong value to add
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v + i) with full ordering.
Unsafe to use in noinstr code; use raw_atomic_long_add_negative() there.
Return
true if the resulting value of v is negative, false otherwise.
-
bool atomic_long_add_negative_acquire(long i, atomic_long_t *v)¶
atomic add and test if negative with acquire ordering
Parameters
long ilong value to add
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v + i) with acquire ordering.
Unsafe to use in noinstr code; use raw_atomic_long_add_negative_acquire() there.
Return
true if the resulting value of v is negative, false otherwise.
-
bool atomic_long_add_negative_release(long i, atomic_long_t *v)¶
atomic add and test if negative with release ordering
Parameters
long ilong value to add
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v + i) with release ordering.
Unsafe to use in noinstr code; use raw_atomic_long_add_negative_release() there.
Return
true if the resulting value of v is negative, false otherwise.
-
bool atomic_long_add_negative_relaxed(long i, atomic_long_t *v)¶
atomic add and test if negative with relaxed ordering
Parameters
long ilong value to add
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v + i) with relaxed ordering.
Unsafe to use in noinstr code; use raw_atomic_long_add_negative_relaxed() there.
Return
true if the resulting value of v is negative, false otherwise.
-
long atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u)¶
atomic add unless value with full ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
long along value to add
long ulong value to compare with
Description
If (v != u), atomically updates v to (v + a) with full ordering. Otherwise, v is not modified and relaxed ordering is provided.
Unsafe to use in noinstr code; use raw_atomic_long_fetch_add_unless() there.
Return
The original value of v.
-
bool atomic_long_add_unless(atomic_long_t *v, long a, long u)¶
atomic add unless value with full ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
long along value to add
long ulong value to compare with
Description
If (v != u), atomically updates v to (v + a) with full ordering. Otherwise, v is not modified and relaxed ordering is provided.
Unsafe to use in noinstr code; use raw_atomic_long_add_unless() there.
Return
true if v was updated, false otherwise.
-
bool atomic_long_inc_not_zero(atomic_long_t *v)¶
atomic increment unless zero with full ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
Description
If (v != 0), atomically updates v to (v + 1) with full ordering. Otherwise, v is not modified and relaxed ordering is provided.
Unsafe to use in noinstr code; use raw_atomic_long_inc_not_zero() there.
Return
true if v was updated, false otherwise.
-
bool atomic_long_inc_unless_negative(atomic_long_t *v)¶
atomic increment unless negative with full ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
Description
If (v >= 0), atomically updates v to (v + 1) with full ordering. Otherwise, v is not modified and relaxed ordering is provided.
Unsafe to use in noinstr code; use raw_atomic_long_inc_unless_negative() there.
Return
true if v was updated, false otherwise.
-
bool atomic_long_dec_unless_positive(atomic_long_t *v)¶
atomic decrement unless positive with full ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
Description
If (v <= 0), atomically updates v to (v - 1) with full ordering. Otherwise, v is not modified and relaxed ordering is provided.
Unsafe to use in noinstr code; use raw_atomic_long_dec_unless_positive() there.
Return
true if v was updated, false otherwise.
-
long atomic_long_dec_if_positive(atomic_long_t *v)¶
atomic decrement if positive with full ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
Description
If (v > 0), atomically updates v to (v - 1) with full ordering. Otherwise, v is not modified and relaxed ordering is provided.
Unsafe to use in noinstr code; use raw_atomic_long_dec_if_positive() there.
Return
The old value of (v - 1), regardless of whether v was updated.
-
int raw_atomic_read(const atomic_t *v)¶
atomic load with relaxed ordering
Parameters
const atomic_t *vpointer to atomic_t
Description
Atomically loads the value of v with relaxed ordering.
Safe to use in noinstr code; prefer atomic_read() elsewhere.
Return
The value loaded from v.
-
int raw_atomic_read_acquire(const atomic_t *v)¶
atomic load with acquire ordering
Parameters
const atomic_t *vpointer to atomic_t
Description
Atomically loads the value of v with acquire ordering.
Safe to use in noinstr code; prefer atomic_read_acquire() elsewhere.
Return
The value loaded from v.
-
void raw_atomic_set(atomic_t *v, int i)¶
atomic set with relaxed ordering
Parameters
atomic_t *vpointer to atomic_t
int iint value to assign
Description
Atomically sets v to i with relaxed ordering.
Safe to use in noinstr code; prefer atomic_set() elsewhere.
Return
Nothing.
-
void raw_atomic_set_release(atomic_t *v, int i)¶
atomic set with release ordering
Parameters
atomic_t *vpointer to atomic_t
int iint value to assign
Description
Atomically sets v to i with release ordering.
Safe to use in noinstr code; prefer atomic_set_release() elsewhere.
Return
Nothing.
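The raw_ variants exist for code that must not be instrumented. A minimal sketch of the intended split, with a hypothetical flag touched from a noinstr entry path:

#include <linux/atomic.h>
#include <linux/compiler_types.h>

static atomic_t in_low_level_path = ATOMIC_INIT(0);

/* noinstr code must use the raw_ variants. */
static noinstr void low_level_enter(void)
{
        raw_atomic_set(&in_low_level_path, 1);
}

static noinstr void low_level_exit(void)
{
        raw_atomic_set_release(&in_low_level_path, 0);
}

/* Ordinary code prefers the instrumented wrappers. */
static bool in_low_level(void)
{
        return atomic_read_acquire(&in_low_level_path) != 0;
}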
-
void raw_atomic_add(int i, atomic_t *v)¶
atomic add with relaxed ordering
Parameters
int iint value to add
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v + i) with relaxed ordering.
Safe to use in noinstr code; prefer atomic_add() elsewhere.
Return
Nothing.
-
int raw_atomic_add_return(int i, atomic_t *v)¶
atomic add with full ordering
Parameters
int iint value to add
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v + i) with full ordering.
Safe to use in noinstr code; prefer atomic_add_return() elsewhere.
Return
The updated value of v.
-
int raw_atomic_add_return_acquire(int i, atomic_t *v)¶
atomic add with acquire ordering
Parameters
int iint value to add
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v + i) with acquire ordering.
Safe to use in noinstr code; prefer atomic_add_return_acquire() elsewhere.
Return
The updated value of v.
-
int raw_atomic_add_return_release(int i, atomic_t *v)¶
atomic add with release ordering
Parameters
int iint value to add
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v + i) with release ordering.
Safe to use in noinstr code; prefer atomic_add_return_release() elsewhere.
Return
The updated value of v.
-
int raw_atomic_add_return_relaxed(int i, atomic_t *v)¶
atomic add with relaxed ordering
Parameters
int iint value to add
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v + i) with relaxed ordering.
Safe to use in noinstr code; prefer atomic_add_return_relaxed() elsewhere.
Return
The updated value of v.
-
int raw_atomic_fetch_add(int i, atomic_t *v)¶
atomic add with full ordering
Parameters
int iint value to add
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v + i) with full ordering.
Safe to use in noinstr code; prefer atomic_fetch_add() elsewhere.
Return
The original value of v.
-
int raw_atomic_fetch_add_acquire(int i, atomic_t *v)¶
atomic add with acquire ordering
Parameters
int iint value to add
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v + i) with acquire ordering.
Safe to use in noinstr code; prefer atomic_fetch_add_acquire() elsewhere.
Return
The original value of v.
-
int raw_atomic_fetch_add_release(int i, atomic_t *v)¶
atomic add with release ordering
Parameters
int iint value to add
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v + i) with release ordering.
Safe to use in noinstr code; prefer atomic_fetch_add_release() elsewhere.
Return
The original value of v.
-
int raw_atomic_fetch_add_relaxed(int i, atomic_t *v)¶
atomic add with relaxed ordering
Parameters
int iint value to add
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v + i) with relaxed ordering.
Safe to use in noinstr code; prefer atomic_fetch_add_relaxed() elsewhere.
Return
The original value of v.
-
void raw_atomic_sub(int i, atomic_t *v)¶
atomic subtract with relaxed ordering
Parameters
int iint value to subtract
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v - i) with relaxed ordering.
Safe to use in noinstr code; prefer atomic_sub() elsewhere.
Return
Nothing.
-
int raw_atomic_sub_return(int i, atomic_t *v)¶
atomic subtract with full ordering
Parameters
int iint value to subtract
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v - i) with full ordering.
Safe to use in noinstr code; prefer atomic_sub_return() elsewhere.
Return
The updated value of v.
-
int raw_atomic_sub_return_acquire(int i, atomic_t *v)¶
atomic subtract with acquire ordering
Parameters
int iint value to subtract
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v - i) with acquire ordering.
Safe to use in noinstr code; prefer atomic_sub_return_acquire() elsewhere.
Return
The updated value of v.
-
int raw_atomic_sub_return_release(int i, atomic_t *v)¶
atomic subtract with release ordering
Parameters
int iint value to subtract
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v - i) with release ordering.
Safe to use in noinstr code; prefer atomic_sub_return_release() elsewhere.
Return
The updated value of v.
-
int raw_atomic_sub_return_relaxed(int i, atomic_t *v)¶
atomic subtract with relaxed ordering
Parameters
int iint value to subtract
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v - i) with relaxed ordering.
Safe to use in noinstr code; prefer atomic_sub_return_relaxed() elsewhere.
Return
The updated value of v.
-
int raw_atomic_fetch_sub(int i, atomic_t *v)¶
atomic subtract with full ordering
Parameters
int iint value to subtract
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v - i) with full ordering.
Safe to use in noinstr code; prefer atomic_fetch_sub() elsewhere.
Return
The original value of v.
-
int raw_atomic_fetch_sub_acquire(int i, atomic_t *v)¶
atomic subtract with acquire ordering
Parameters
int iint value to subtract
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v - i) with acquire ordering.
Safe to use in noinstr code; prefer atomic_fetch_sub_acquire() elsewhere.
Return
The original value of v.
-
int raw_atomic_fetch_sub_release(int i, atomic_t *v)¶
atomic subtract with release ordering
Parameters
int iint value to subtract
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v - i) with release ordering.
Safe to use in noinstr code; prefer atomic_fetch_sub_release() elsewhere.
Return
The original value of v.
-
int raw_atomic_fetch_sub_relaxed(int i, atomic_t *v)¶
atomic subtract with relaxed ordering
Parameters
int iint value to subtract
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v - i) with relaxed ordering.
Safe to use in noinstr code; prefer atomic_fetch_sub_relaxed() elsewhere.
Return
The original value of v.
-
void raw_atomic_inc(atomic_t *v)¶
atomic increment with relaxed ordering
Parameters
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v + 1) with relaxed ordering.
Safe to use in noinstr code; prefer atomic_inc() elsewhere.
Return
Nothing.
-
int raw_atomic_inc_return(atomic_t *v)¶
atomic increment with full ordering
Parameters
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v + 1) with full ordering.
Safe to use in noinstr code; prefer atomic_inc_return() elsewhere.
Return
The updated value of v.
-
int raw_atomic_inc_return_acquire(atomic_t *v)¶
atomic increment with acquire ordering
Parameters
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v + 1) with acquire ordering.
Safe to use in noinstr code; prefer atomic_inc_return_acquire() elsewhere.
Return
The updated value of v.
-
int raw_atomic_inc_return_release(atomic_t *v)¶
atomic increment with release ordering
Parameters
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v + 1) with release ordering.
Safe to use in noinstr code; prefer atomic_inc_return_release() elsewhere.
Return
The updated value of v.
-
int raw_atomic_inc_return_relaxed(atomic_t *v)¶
atomic increment with relaxed ordering
Parameters
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v + 1) with relaxed ordering.
Safe to use in noinstr code; prefer atomic_inc_return_relaxed() elsewhere.
Return
The updated value of v.
-
int raw_atomic_fetch_inc(atomic_t *v)¶
atomic increment with full ordering
Parameters
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v + 1) with full ordering.
Safe to use in noinstr code; prefer atomic_fetch_inc() elsewhere.
Return
The original value of v.
-
int raw_atomic_fetch_inc_acquire(atomic_t *v)¶
atomic increment with acquire ordering
Parameters
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v + 1) with acquire ordering.
Safe to use in noinstr code; prefer atomic_fetch_inc_acquire() elsewhere.
Return
The original value of v.
-
int raw_atomic_fetch_inc_release(atomic_t *v)¶
atomic increment with release ordering
Parameters
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v + 1) with release ordering.
Safe to use in noinstr code; prefer atomic_fetch_inc_release() elsewhere.
Return
The original value of v.
-
int raw_atomic_fetch_inc_relaxed(atomic_t *v)¶
atomic increment with relaxed ordering
Parameters
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v + 1) with relaxed ordering.
Safe to use in noinstr code; prefer atomic_fetch_inc_relaxed() elsewhere.
Return
The original value of v.
-
void raw_atomic_dec(atomic_t *v)¶
atomic decrement with relaxed ordering
Parameters
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v - 1) with relaxed ordering.
Safe to use in noinstr code; prefer atomic_dec() elsewhere.
Return
Nothing.
-
int raw_atomic_dec_return(atomic_t *v)¶
atomic decrement with full ordering
Parameters
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v - 1) with full ordering.
Safe to use in noinstr code; prefer atomic_dec_return() elsewhere.
Return
The updated value of v.
-
int raw_atomic_dec_return_acquire(atomic_t *v)¶
atomic decrement with acquire ordering
Parameters
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v - 1) with acquire ordering.
Safe to use in noinstr code; prefer atomic_dec_return_acquire() elsewhere.
Return
The updated value of v.
-
int raw_atomic_dec_return_release(atomic_t *v)¶
atomic decrement with release ordering
Parameters
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v - 1) with release ordering.
Safe to use in noinstr code; prefer atomic_dec_return_release() elsewhere.
Return
The updated value of v.
-
int raw_atomic_dec_return_relaxed(atomic_t *v)¶
atomic decrement with relaxed ordering
Parameters
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v - 1) with relaxed ordering.
Safe to use in noinstr code; prefer atomic_dec_return_relaxed() elsewhere.
Return
The updated value of v.
-
int raw_atomic_fetch_dec(atomic_t *v)¶
atomic decrement with full ordering
Parameters
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v - 1) with full ordering.
Safe to use in noinstr code; prefer atomic_fetch_dec() elsewhere.
Return
The original value of v.
-
int raw_atomic_fetch_dec_acquire(atomic_t *v)¶
atomic decrement with acquire ordering
Parameters
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v - 1) with acquire ordering.
Safe to use in noinstr code; prefer atomic_fetch_dec_acquire() elsewhere.
Return
The original value of v.
-
int raw_atomic_fetch_dec_release(atomic_t *v)¶
atomic decrement with release ordering
Parameters
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v - 1) with release ordering.
Safe to use in noinstr code; prefer atomic_fetch_dec_release() elsewhere.
Return
The original value of v.
-
int raw_atomic_fetch_dec_relaxed(atomic_t *v)¶
atomic decrement with relaxed ordering
Parameters
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v - 1) with relaxed ordering.
Safe to use in noinstr code; prefer atomic_fetch_dec_relaxed() elsewhere.
Return
The original value of v.
-
void raw_atomic_and(int i, atomic_t *v)¶
atomic bitwise AND with relaxed ordering
Parameters
int iint value
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v & i) with relaxed ordering.
Safe to use in noinstr code; prefer atomic_and() elsewhere.
Return
Nothing.
-
int raw_atomic_fetch_and(int i, atomic_t *v)¶
atomic bitwise AND with full ordering
Parameters
int iint value
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v & i) with full ordering.
Safe to use in noinstr code; prefer atomic_fetch_and() elsewhere.
Return
The original value of v.
-
int raw_atomic_fetch_and_acquire(int i, atomic_t *v)¶
atomic bitwise AND with acquire ordering
Parameters
int iint value
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v & i) with acquire ordering.
Safe to use in noinstr code; prefer atomic_fetch_and_acquire() elsewhere.
Return
The original value of v.
-
int raw_atomic_fetch_and_release(int i, atomic_t *v)¶
atomic bitwise AND with release ordering
Parameters
int iint value
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v & i) with release ordering.
Safe to use in noinstr code; prefer atomic_fetch_and_release() elsewhere.
Return
The original value of v.
-
int raw_atomic_fetch_and_relaxed(int i, atomic_t *v)¶
atomic bitwise AND with relaxed ordering
Parameters
int iint value
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v & i) with relaxed ordering.
Safe to use in noinstr code; prefer atomic_fetch_and_relaxed() elsewhere.
Return
The original value of v.
-
void raw_atomic_andnot(int i, atomic_t *v)¶
atomic bitwise AND NOT with relaxed ordering
Parameters
int iint value
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v & ~i) with relaxed ordering.
Safe to use in noinstr code; prefer atomic_andnot() elsewhere.
Return
Nothing.
-
int raw_atomic_fetch_andnot(int i, atomic_t *v)¶
atomic bitwise AND NOT with full ordering
Parameters
int iint value
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v & ~i) with full ordering.
Safe to use in noinstr code; prefer atomic_fetch_andnot() elsewhere.
Return
The original value of v.
-
int raw_atomic_fetch_andnot_acquire(int i, atomic_t *v)¶
atomic bitwise AND NOT with acquire ordering
Parameters
int iint value
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v & ~i) with acquire ordering.
Safe to use in noinstr code; prefer atomic_fetch_andnot_acquire() elsewhere.
Return
The original value of v.
-
int raw_atomic_fetch_andnot_release(int i, atomic_t *v)¶
atomic bitwise AND NOT with release ordering
Parameters
int iint value
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v & ~i) with release ordering.
Safe to use in noinstr code; prefer atomic_fetch_andnot_release() elsewhere.
Return
The original value of v.
-
int raw_atomic_fetch_andnot_relaxed(int i, atomic_t *v)¶
atomic bitwise AND NOT with relaxed ordering
Parameters
int iint value
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v & ~i) with relaxed ordering.
Safe to use in noinstr code; prefer atomic_fetch_andnot_relaxed() elsewhere.
Return
The original value of v.
-
void raw_atomic_or(int i, atomic_t *v)¶
atomic bitwise OR with relaxed ordering
Parameters
int iint value
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v | i) with relaxed ordering.
Safe to use in noinstr code; prefer atomic_or() elsewhere.
Return
Nothing.
-
int raw_atomic_fetch_or(int i, atomic_t *v)¶
atomic bitwise OR with full ordering
Parameters
int iint value
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v | i) with full ordering.
Safe to use in noinstr code; prefer atomic_fetch_or() elsewhere.
Return
The original value of v.
-
int raw_atomic_fetch_or_acquire(int i, atomic_t *v)¶
atomic bitwise OR with acquire ordering
Parameters
int iint value
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v | i) with acquire ordering.
Safe to use in noinstr code; prefer atomic_fetch_or_acquire() elsewhere.
Return
The original value of v.
-
int raw_atomic_fetch_or_release(int i, atomic_t *v)¶
atomic bitwise OR with release ordering
Parameters
int iint value
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v | i) with release ordering.
Safe to use in noinstr code; prefer atomic_fetch_or_release() elsewhere.
Return
The original value of v.
-
int raw_atomic_fetch_or_relaxed(int i, atomic_t *v)¶
atomic bitwise OR with relaxed ordering
Parameters
int iint value
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v | i) with relaxed ordering.
Safe to use in noinstr code; prefer atomic_fetch_or_relaxed() elsewhere.
Return
The original value of v.
-
void raw_atomic_xor(int i, atomic_t *v)¶
atomic bitwise XOR with relaxed ordering
Parameters
int iint value
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v ^ i) with relaxed ordering.
Safe to use in noinstr code; prefer atomic_xor() elsewhere.
Return
Nothing.
-
int raw_atomic_fetch_xor(int i, atomic_t *v)¶
atomic bitwise XOR with full ordering
Parameters
int iint value
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v ^ i) with full ordering.
Safe to use in noinstr code; prefer atomic_fetch_xor() elsewhere.
Return
The original value of v.
-
int raw_atomic_fetch_xor_acquire(int i, atomic_t *v)¶
atomic bitwise XOR with acquire ordering
Parameters
int iint value
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v ^ i) with acquire ordering.
Safe to use in noinstr code; prefer atomic_fetch_xor_acquire() elsewhere.
Return
The original value of v.
-
int raw_atomic_fetch_xor_release(int i, atomic_t *v)¶
atomic bitwise XOR with release ordering
Parameters
int iint value
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v ^ i) with release ordering.
Safe to use in noinstr code; prefer atomic_fetch_xor_release() elsewhere.
Return
The original value of v.
-
int raw_atomic_fetch_xor_relaxed(int i, atomic_t *v)¶
atomic bitwise XOR with relaxed ordering
Parameters
int iint value
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v ^ i) with relaxed ordering.
Safe to use in noinstr code; prefer atomic_fetch_xor_relaxed() elsewhere.
Return
The original value of v.
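XOR toggles the selected bits, and the fetch variants additionally report the prior value, so the caller learns what the toggled bit was before the flip. A hypothetical sketch:
#include <linux/atomic.h>

static atomic_t my_parity = ATOMIC_INIT(0);	/* hypothetical parity bit */

/* Flip bit 0 and return what the parity was before the flip. */
static int my_toggle_parity(void)
{
	return raw_atomic_fetch_xor(1, &my_parity) & 1;
}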
-
int raw_atomic_xchg(atomic_t *v, int new)¶
atomic exchange with full ordering
Parameters
atomic_t *vpointer to atomic_t
int newint value to assign
Description
Atomically updates v to new with full ordering.
Safe to use in noinstr code; prefer atomic_xchg() elsewhere.
Return
The original value of v.
-
int raw_atomic_xchg_acquire(atomic_t *v, int new)¶
atomic exchange with acquire ordering
Parameters
atomic_t *vpointer to atomic_t
int newint value to assign
Description
Atomically updates v to new with acquire ordering.
Safe to use in noinstr code; prefer atomic_xchg_acquire() elsewhere.
Return
The original value of v.
-
int raw_atomic_xchg_release(atomic_t *v, int new)¶
atomic exchange with release ordering
Parameters
atomic_t *vpointer to atomic_t
int newint value to assign
Description
Atomically updates v to new with release ordering.
Safe to use in noinstr code; prefer atomic_xchg_release() elsewhere.
Return
The original value of v.
-
int raw_atomic_xchg_relaxed(atomic_t *v, int new)¶
atomic exchange with relaxed ordering
Parameters
atomic_t *vpointer to atomic_t
int newint value to assign
Description
Atomically updates v to new with relaxed ordering.
Safe to use in noinstr code; prefer atomic_xchg_relaxed() elsewhere.
Return
The original value of v.
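xchg unconditionally installs the new value and hands back whatever was there, which gives a one-shot "take and reset" of published state. A hypothetical sketch (the event-mask variable and function are invented for the example):
#include <linux/atomic.h>

static atomic_t my_pending_events = ATOMIC_INIT(0);	/* hypothetical event mask */

/* Atomically take the published event mask and reset it to zero. */
static int my_take_events(void)
{
	return raw_atomic_xchg(&my_pending_events, 0);
}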
-
int raw_atomic_cmpxchg(atomic_t *v, int old, int new)¶
atomic compare and exchange with full ordering
Parameters
atomic_t *vpointer to atomic_t
int oldint value to compare with
int newint value to assign
Description
If (v == old), atomically updates v to new with full ordering. Otherwise, v is not modified and relaxed ordering is provided.
Safe to use in noinstr code; prefer atomic_cmpxchg() elsewhere.
Return
The original value of v.
-
int raw_atomic_cmpxchg_acquire(atomic_t *v, int old, int new)¶
atomic compare and exchange with acquire ordering
Parameters
atomic_t *vpointer to atomic_t
int oldint value to compare with
int newint value to assign
Description
If (v == old), atomically updates v to new with acquire ordering. Otherwise, v is not modified and relaxed ordering is provided.
Safe to use in noinstr code; prefer atomic_cmpxchg_acquire() elsewhere.
Return
The original value of v.
-
int raw_atomic_cmpxchg_release(atomic_t *v, int old, int new)¶
atomic compare and exchange with release ordering
Parameters
atomic_t *vpointer to atomic_t
int oldint value to compare with
int newint value to assign
Description
If (v == old), atomically updates v to new with release ordering. Otherwise, v is not modified and relaxed ordering is provided.
Safe to use in noinstr code; prefer atomic_cmpxchg_release() elsewhere.
Return
The original value of v.
-
int raw_atomic_cmpxchg_relaxed(atomic_t *v, int old, int new)¶
atomic compare and exchange with relaxed ordering
Parameters
atomic_t *vpointer to atomic_t
int oldint value to compare with
int newint value to assign
Description
If (v == old), atomically updates v to new with relaxed ordering. Otherwise, v is not modified and relaxed ordering is provided.
Safe to use in noinstr code; prefer atomic_cmpxchg_relaxed() elsewhere.
Return
The original value of v.
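Because cmpxchg returns the original value whether or not the store happened, the classic retry loop compares that return value against the expected old value and tries again with the observed one. The sketch below is a hypothetical saturating add (assuming i >= 0); it uses raw_atomic_read(), documented earlier, and INT_MAX from <linux/limits.h>.
#include <linux/atomic.h>
#include <linux/limits.h>

/* Hypothetical policy: add i (i >= 0) but saturate at INT_MAX. */
static void my_add_saturating(atomic_t *v, int i)
{
	int old = raw_atomic_read(v);
	int new, prev;

	for (;;) {
		new = (old > INT_MAX - i) ? INT_MAX : old + i;
		prev = raw_atomic_cmpxchg(v, old, new);
		if (prev == old)
			break;		/* the exchange happened */
		old = prev;		/* lost the race; retry with the current value */
	}
}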
-
bool raw_atomic_try_cmpxchg(atomic_t *v, int *old, int new)¶
atomic compare and exchange with full ordering
Parameters
atomic_t *vpointer to atomic_t
int *oldpointer to int value to compare with
int newint value to assign
Description
If (v == old), atomically updates v to new with full ordering. Otherwise, v is not modified, old is updated to the current value of v, and relaxed ordering is provided.
Safe to use in noinstr code; prefer atomic_try_cmpxchg() elsewhere.
Return
true if the exchange occurred, false otherwise.
-
bool raw_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)¶
atomic compare and exchange with acquire ordering
Parameters
atomic_t *vpointer to atomic_t
int *oldpointer to int value to compare with
int newint value to assign
Description
If (v == old), atomically updates v to new with acquire ordering. Otherwise, v is not modified, old is updated to the current value of v, and relaxed ordering is provided.
Safe to use in noinstr code; prefer atomic_try_cmpxchg_acquire() elsewhere.
Return
true if the exchange occurred, false otherwise.
-
bool raw_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)¶
atomic compare and exchange with release ordering
Parameters
atomic_t *vpointer to atomic_t
int *oldpointer to int value to compare with
int newint value to assign
Description
If (v == old), atomically updates v to new with release ordering. Otherwise, v is not modified, old is updated to the current value of v, and relaxed ordering is provided.
Safe to use in noinstr code; prefer atomic_try_cmpxchg_release() elsewhere.
Return
true if the exchange occurred, false otherwise.
-
bool raw_atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new)¶
atomic compare and exchange with relaxed ordering
Parameters
atomic_t *vpointer to atomic_t
int *oldpointer to int value to compare with
int newint value to assign
Description
If (v == old), atomically updates v to new with relaxed ordering. Otherwise, v is not modified, old is updated to the current value of v, and relaxed ordering is provided.
Safe to use in noinstr code; prefer atomic_try_cmpxchg_relaxed() elsewhere.
Return
true if the exchange occurred, false otherwise.
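try_cmpxchg folds the comparison into a boolean result and, on failure, rewrites *old with the current value of v, so the retry loop needs no separate reload. The same hypothetical saturating add from the previous sketch, rewritten in this style:
#include <linux/atomic.h>
#include <linux/limits.h>

/* Hypothetical policy: add i (i >= 0) but saturate at INT_MAX. */
static void my_add_saturating_try(atomic_t *v, int i)
{
	int old = raw_atomic_read(v);
	int new;

	do {
		new = (old > INT_MAX - i) ? INT_MAX : old + i;
		/* On failure, old has already been refreshed from v. */
	} while (!raw_atomic_try_cmpxchg(v, &old, new));
}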
-
bool raw_atomic_sub_and_test(int i, atomic_t *v)¶
atomic subtract and test if zero with full ordering
Parameters
int iint value to subtract
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v - i) with full ordering.
Safe to use in noinstr code; prefer atomic_sub_and_test() elsewhere.
Return
true if the resulting value of v is zero, false otherwise.
-
bool raw_atomic_dec_and_test(atomic_t *v)¶
atomic decrement and test if zero with full ordering
Parameters
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v - 1) with full ordering.
Safe to use in noinstr code; prefer atomic_dec_and_test() elsewhere.
Return
true if the resulting value of v is zero, false otherwise.
-
bool raw_atomic_inc_and_test(atomic_t *v)¶
atomic increment and test if zero with full ordering
Parameters
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v + 1) with full ordering.
Safe to use in noinstr code; prefer atomic_inc_and_test() elsewhere.
Return
true if the resulting value of v is zero, false otherwise.
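The *_and_test helpers combine the update with a test of the result under full ordering; dec_and_test is the classic "last reference frees the object" pattern. A hypothetical sketch (struct and function names invented for the example; ordinary driver code would typically reach for refcount_t or the instrumented atomic helpers instead):
#include <linux/atomic.h>
#include <linux/slab.h>

struct my_object {			/* hypothetical refcounted object */
	atomic_t refs;
};

static void my_object_put(struct my_object *obj)
{
	/* The thread that drops the count to zero frees the object. */
	if (raw_atomic_dec_and_test(&obj->refs))
		kfree(obj);
}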
-
bool raw_atomic_add_negative(int i, atomic_t *v)¶
atomic add and test if negative with full ordering
Parameters
int iint value to add
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v + i) with full ordering.
Safe to use in noinstr code; prefer atomic_add_negative() elsewhere.
Return
true if the resulting value of v is negative, false otherwise.
-
bool raw_atomic_add_negative_acquire(int i, atomic_t *v)¶
atomic add and test if negative with acquire ordering
Parameters
int iint value to add
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v + i) with acquire ordering.
Safe to use in noinstr code; prefer atomic_add_negative_acquire() elsewhere.
Return
true if the resulting value of v is negative, false otherwise.
-
bool raw_atomic_add_negative_release(int i, atomic_t *v)¶
atomic add and test if negative with release ordering
Parameters
int iint value to add
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v + i) with release ordering.
Safe to use in noinstr code; prefer atomic_add_negative_release() elsewhere.
Return
true if the resulting value of v is negative, false otherwise.
-
bool raw_atomic_add_negative_relaxed(int i, atomic_t *v)¶
atomic add and test if negative with relaxed ordering
Parameters
int iint value to add
atomic_t *vpointer to atomic_t
Description
Atomically updates v to (v + i) with relaxed ordering.
Safe to use in noinstr code; prefer atomic_add_negative_relaxed() elsewhere.
Return
true if the resulting value of v is negative, false otherwise.
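The add_negative variants report whether the result crossed below zero. A hypothetical sketch of a signed budget that is charged by adding a negative delta (the budget variable and function are invented for the example):
#include <linux/atomic.h>
#include <linux/types.h>

static atomic_t my_budget = ATOMIC_INIT(16);	/* hypothetical budget, in units */

/* Charge cost units; report whether the budget has gone negative. */
static bool my_charge(int cost)
{
	return raw_atomic_add_negative(-cost, &my_budget);
}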
-
int raw_atomic_fetch_add_unless(atomic_t *v, int a, int u)¶
atomic add unless value with full ordering
Parameters
atomic_t *vpointer to atomic_t
int aint value to add
int uint value to compare with
Description
If (v != u), atomically updates v to (v + a) with full ordering. Otherwise, v is not modified and relaxed ordering is provided.
Safe to use in noinstr code; prefer atomic_fetch_add_unless() elsewhere.
Return
The original value of v.
-
bool raw_atomic_add_unless(atomic_t *v, int a, int u)¶
atomic add unless value with full ordering
Parameters
atomic_t *vpointer to atomic_t
int aint value to add
int uint value to compare with
Description
If (v != u), atomically updates v to (v + a) with full ordering. Otherwise, v is not modified and relaxed ordering is provided.
Safe to use in noinstr code; prefer atomic_add_unless() elsewhere.
Return
true if v was updated, false otherwise.
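add_unless (and fetch_add_unless, which additionally returns the original value) performs the addition only while v differs from u, which is handy when one value is reserved as a sentinel. A hypothetical sketch of an event counter that can be "closed" by storing the sentinel:
#include <linux/atomic.h>
#include <linux/types.h>

#define MY_COUNTER_CLOSED	(-1)		/* hypothetical sentinel value */

static atomic_t my_events = ATOMIC_INIT(0);

/* Count an event unless the counter has been closed; true if counted. */
static bool my_count_event(void)
{
	return raw_atomic_add_unless(&my_events, 1, MY_COUNTER_CLOSED);
}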
-
bool raw_atomic_inc_not_zero(atomic_t *v)¶
atomic increment unless zero with full ordering
Parameters
atomic_t *vpointer to atomic_t
Description
If (v != 0), atomically updates v to (v + 1) with full ordering. Otherwise, v is not modified and relaxed ordering is provided.
Safe to use in noinstr code; prefer atomic_inc_not_zero() elsewhere.
Return
true if v was updated, false otherwise.
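inc_not_zero is the usual "take a reference only if the object is still live" operation, complementing dec_and_test from the earlier sketch; struct my_object below is the same hypothetical type introduced there.
#include <linux/atomic.h>
#include <linux/types.h>

static bool my_object_get(struct my_object *obj)
{
	/* Refuse to resurrect an object whose count already hit zero. */
	return raw_atomic_inc_not_zero(&obj->refs);
}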
-
bool raw_atomic_inc_unless_negative(atomic_t *v)¶
atomic increment unless negative with full ordering
Parameters
atomic_t *vpointer to atomic_t
Description
If (v >= 0), atomically updates v to (v + 1) with full ordering. Otherwise, v is not modified and relaxed ordering is provided.
Safe to use in noinstr code; prefer atomic_inc_unless_negative() elsewhere.
Return
true if v was updated, false otherwise.
-
bool raw_atomic_dec_unless_positive(atomic_t *v)¶
atomic decrement unless positive with full ordering
Parameters
atomic_t *vpointer to atomic_t
Description
If (v <= 0), atomically updates v to (v - 1) with full ordering. Otherwise, v is not modified and relaxed ordering is provided.
Safe to use in noinstr code; prefer atomic_dec_unless_positive() elsewhere.
Return
true if v was updated, false otherwise.
-
int raw_atomic_dec_if_positive(atomic_t *v)¶
atomic decrement if positive with full ordering
Parameters
atomic_t *vpointer to atomic_t
Description
If (v > 0), atomically updates v to (v - 1) with full ordering. Otherwise, v is not modified and relaxed ordering is provided.
Safe to use in noinstr code; prefer atomic_dec_if_positive() elsewhere.
Return
The original value of v minus one, regardless of whether v was updated.
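dec_if_positive only decrements while the value is still positive, yet it returns the decremented result either way, so a negative return means nothing was taken. A hypothetical token-pool sketch:
#include <linux/atomic.h>
#include <linux/types.h>

static atomic_t my_tokens = ATOMIC_INIT(4);	/* hypothetical token pool */

/* Take one token if any are available; true if a token was taken. */
static bool my_take_token(void)
{
	return raw_atomic_dec_if_positive(&my_tokens) >= 0;
}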
-
s64 raw_atomic64_read(const atomic64_t *v)¶
atomic load with relaxed ordering
Parameters
const atomic64_t *vpointer to atomic64_t
Description
Atomically loads the value of v with relaxed ordering.
Safe to use in noinstr code; prefer atomic64_read() elsewhere.
Return
The value loaded from v.
-
s64 raw_atomic64_read_acquire(const atomic64_t *v)¶
atomic load with acquire ordering
Parameters
const atomic64_t *vpointer to atomic64_t
Description
Atomically loads the value of v with acquire ordering.
Safe to use in noinstr code; prefer atomic64_read_acquire() elsewhere.
Return
The value loaded from v.
-
void raw_atomic64_set(atomic64_t *v, s64 i)¶
atomic set with relaxed ordering
Parameters
atomic64_t *vpointer to atomic64_t
s64 is64 value to assign
Description
Atomically sets v to i with relaxed ordering.
Safe to use in noinstr code; prefer atomic64_set() elsewhere.
Return
Nothing.
-
void raw_atomic64_set_release(atomic64_t *v, s64 i)¶
atomic set with release ordering
Parameters
atomic64_t *vpointer to atomic64_t
s64 is64 value to assign
Description
Atomically sets v to i with release ordering.
Safe to use in noinstr code; prefer atomic64_set_release() elsewhere.
Return
Nothing.
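The atomic64_t variants mirror the atomic_t interface for 64-bit values, which is what counters that can exceed 32 bits need even on 32-bit kernels. A minimal hypothetical sketch pairing a release store with an acquire load (variable and function names invented for the example):
#include <linux/atomic.h>

static atomic64_t my_total_bytes = ATOMIC64_INIT(0);	/* hypothetical counter */

static void my_publish_total(s64 total)
{
	/* Release: writes done before this store are visible to an
	 * acquire reader that observes the new value. */
	raw_atomic64_set_release(&my_total_bytes, total);
}

static s64 my_read_total(void)
{
	return raw_atomic64_read_acquire(&my_total_bytes);
}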
-
void raw_atomic64_add(s64 i, atomic64_t *v)¶
atomic add with relaxed ordering
Parameters
s64 is64 value to add
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v + i) with relaxed ordering.
Safe to use in noinstr code; prefer atomic64_add() elsewhere.
Return
Nothing.
-
s64 raw_atomic64_add_return(s64 i, atomic64_t *v)¶
atomic add with full ordering
Parameters
s64 is64 value to add
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v + i) with full ordering.
Safe to use in noinstr code; prefer atomic64_add_return() elsewhere.
Return
The updated value of v.
-
s64 raw_atomic64_add_return_acquire(s64 i, atomic64_t *v)¶
atomic add with acquire ordering
Parameters
s64 is64 value to add
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v + i) with acquire ordering.
Safe to use in noinstr code; prefer atomic64_add_return_acquire() elsewhere.
Return
The updated value of v.
-
s64 raw_atomic64_add_return_release(s64 i, atomic64_t *v)¶
atomic add with release ordering
Parameters
s64 is64 value to add
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v + i) with release ordering.
Safe to use in noinstr code; prefer atomic64_add_return_release() elsewhere.
Return
The updated value of v.
-
s64 raw_atomic64_add_return_relaxed(s64 i, atomic64_t *v)¶
atomic add with relaxed ordering
Parameters
s64 is64 value to add
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v + i) with relaxed ordering.
Safe to use in noinstr code; prefer atomic64_add_return_relaxed() elsewhere.
Return
The updated value of v.
-
s64 raw_atomic64_fetch_add(s64 i, atomic64_t *v)¶
atomic add with full ordering
Parameters
s64 is64 value to add
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v + i) with full ordering.
Safe to use in noinstr code; prefer atomic64_fetch_add() elsewhere.
Return
The original value of v.
-
s64 raw_atomic64_fetch_add_acquire(s64 i, atomic64_t *v)¶
atomic add with acquire ordering
Parameters
s64 is64 value to add
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v + i) with acquire ordering.
Safe to use in noinstr code; prefer atomic64_fetch_add_acquire() elsewhere.
Return
The original value of v.
-
s64 raw_atomic64_fetch_add_release(s64 i, atomic64_t *v)¶
atomic add with release ordering
Parameters
s64 is64 value to add
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v + i) with release ordering.
Safe to use in noinstr code; prefer atomic64_fetch_add_release() elsewhere.
Return
The original value of v.
-
s64 raw_atomic64_fetch_add_relaxed(s64 i, atomic64_t *v)¶
atomic add with relaxed ordering
Parameters
s64 is64 value to add
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v + i) with relaxed ordering.
Safe to use in noinstr code; prefer atomic64_fetch_add_relaxed() elsewhere.
Return
The original value of v.
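For plain statistics the relaxed add is usually sufficient; the fetch_add variants additionally return the pre-addition value when the caller needs the running total. A hypothetical byte-counter sketch:
#include <linux/atomic.h>

static atomic64_t my_rx_bytes = ATOMIC64_INIT(0);	/* hypothetical statistic */

static void my_account_rx(s64 len)
{
	raw_atomic64_add(len, &my_rx_bytes);		/* no ordering needed */
}

static s64 my_account_rx_total(s64 len)
{
	/* fetch_add returns the value before the addition. */
	return raw_atomic64_fetch_add(len, &my_rx_bytes) + len;
}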
-
void raw_atomic64_sub(s64 i, atomic64_t *v)¶
atomic subtract with relaxed ordering
Parameters
s64 is64 value to subtract
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v - i) with relaxed ordering.
Safe to use in noinstr code; prefer atomic64_sub() elsewhere.
Return
Nothing.
-
s64 raw_atomic64_sub_return(s64 i, atomic64_t *v)¶
atomic subtract with full ordering
Parameters
s64 is64 value to subtract
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v - i) with full ordering.
Safe to use in noinstr code; prefer atomic64_sub_return() elsewhere.
Return
The updated value of v.
-
s64 raw_atomic64_sub_return_acquire(s64 i, atomic64_t *v)¶
atomic subtract with acquire ordering
Parameters
s64 is64 value to subtract
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v - i) with acquire ordering.
Safe to use in noinstr code; prefer atomic64_sub_return_acquire() elsewhere.
Return
The updated value of v.
-
s64 raw_atomic64_sub_return_release(s64 i, atomic64_t *v)¶
atomic subtract with release ordering
Parameters
s64 is64 value to subtract
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v - i) with release ordering.
Safe to use in noinstr code; prefer atomic64_sub_return_release() elsewhere.
Return
The updated value of v.
-
s64 raw_atomic64_sub_return_relaxed(s64 i, atomic64_t *v)¶
atomic subtract with relaxed ordering
Parameters
s64 is64 value to subtract
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v - i) with relaxed ordering.
Safe to use in noinstr code; prefer atomic64_sub_return_relaxed() elsewhere.
Return
The updated value of v.
-
s64 raw_atomic64_fetch_sub(s64 i, atomic64_t *v)¶
atomic subtract with full ordering
Parameters
s64 is64 value to subtract
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v - i) with full ordering.
Safe to use in noinstr code; prefer atomic64_fetch_sub() elsewhere.
Return
The original value of v.
-
s64 raw_atomic64_fetch_sub_acquire(s64 i, atomic64_t *v)¶
atomic subtract with acquire ordering
Parameters
s64 is64 value to subtract
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v - i) with acquire ordering.
Safe to use in noinstr code; prefer atomic64_fetch_sub_acquire() elsewhere.
Return
The original value of v.
-
s64 raw_atomic64_fetch_sub_release(s64 i, atomic64_t *v)¶
atomic subtract with release ordering
Parameters
s64 is64 value to subtract
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v - i) with release ordering.
Safe to use in noinstr code; prefer atomic64_fetch_sub_release() elsewhere.
Return
The original value of v.
-
s64 raw_atomic64_fetch_sub_relaxed(s64 i, atomic64_t *v)¶
atomic subtract with relaxed ordering
Parameters
s64 is64 value to subtract
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v - i) with relaxed ordering.
Safe to use in noinstr code; prefer atomic64_fetch_sub_relaxed() elsewhere.
Return
The original value of v.
-
void raw_atomic64_inc(atomic64_t *v)¶
atomic increment with relaxed ordering
Parameters
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v + 1) with relaxed ordering.
Safe to use in noinstr code; prefer atomic64_inc() elsewhere.
Return
Nothing.
-
s64 raw_atomic64_inc_return(atomic64_t *v)¶
atomic increment with full ordering
Parameters
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v + 1) with full ordering.
Safe to use in noinstr code; prefer atomic64_inc_return() elsewhere.
Return
The updated value of v.
-
s64 raw_atomic64_inc_return_acquire(atomic64_t *v)¶
atomic increment with acquire ordering
Parameters
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v + 1) with acquire ordering.
Safe to use in noinstr code; prefer atomic64_inc_return_acquire() elsewhere.
Return
The updated value of v.
-
s64 raw_atomic64_inc_return_release(atomic64_t *v)¶
atomic increment with release ordering
Parameters
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v + 1) with release ordering.
Safe to use in noinstr code; prefer atomic64_inc_return_release() elsewhere.
Return
The updated value of v.
-
s64 raw_atomic64_inc_return_relaxed(atomic64_t *v)¶
atomic increment with relaxed ordering
Parameters
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v + 1) with relaxed ordering.
Safe to use in noinstr code; prefer atomic64_inc_return_relaxed() elsewhere.
Return
The updated value of v.
-
s64 raw_atomic64_fetch_inc(atomic64_t *v)¶
atomic increment with full ordering
Parameters
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v + 1) with full ordering.
Safe to use in noinstr code; prefer atomic64_fetch_inc() elsewhere.
Return
The original value of v.
-
s64 raw_atomic64_fetch_inc_acquire(atomic64_t *v)¶
atomic increment with acquire ordering
Parameters
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v + 1) with acquire ordering.
Safe to use in noinstr code; prefer atomic64_fetch_inc_acquire() elsewhere.
Return
The original value of v.
-
s64 raw_atomic64_fetch_inc_release(atomic64_t *v)¶
atomic increment with release ordering
Parameters
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v + 1) with release ordering.
Safe to use in noinstr code; prefer atomic64_fetch_inc_release() elsewhere.
Return
The original value of v.
-
s64 raw_atomic64_fetch_inc_relaxed(atomic64_t *v)¶
atomic increment with relaxed ordering
Parameters
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v + 1) with relaxed ordering.
Safe to use in noinstr code; prefer atomic64_fetch_inc_relaxed() elsewhere.
Return
The original value of v.
-
void raw_atomic64_dec(atomic64_t *v)¶
atomic decrement with relaxed ordering
Parameters
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v - 1) with relaxed ordering.
Safe to use in noinstr code; prefer atomic64_dec() elsewhere.
Return
Nothing.
-
s64 raw_atomic64_dec_return(atomic64_t *v)¶
atomic decrement with full ordering
Parameters
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v - 1) with full ordering.
Safe to use in noinstr code; prefer atomic64_dec_return() elsewhere.
Return
The updated value of v.
-
s64 raw_atomic64_dec_return_acquire(atomic64_t *v)¶
atomic decrement with acquire ordering
Parameters
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v - 1) with acquire ordering.
Safe to use in noinstr code; prefer atomic64_dec_return_acquire() elsewhere.
Return
The updated value of v.
-
s64 raw_atomic64_dec_return_release(atomic64_t *v)¶
atomic decrement with release ordering
Parameters
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v - 1) with release ordering.
Safe to use in noinstr code; prefer atomic64_dec_return_release() elsewhere.
Return
The updated value of v.
-
s64 raw_atomic64_dec_return_relaxed(atomic64_t *v)¶
atomic decrement with relaxed ordering
Parameters
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v - 1) with relaxed ordering.
Safe to use in noinstr code; prefer atomic64_dec_return_relaxed() elsewhere.
Return
The updated value of v.
-
s64 raw_atomic64_fetch_dec(atomic64_t *v)¶
atomic decrement with full ordering
Parameters
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v - 1) with full ordering.
Safe to use in noinstr code; prefer atomic64_fetch_dec() elsewhere.
Return
The original value of v.
-
s64 raw_atomic64_fetch_dec_acquire(atomic64_t *v)¶
atomic decrement with acquire ordering
Parameters
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v - 1) with acquire ordering.
Safe to use in noinstr code; prefer atomic64_fetch_dec_acquire() elsewhere.
Return
The original value of v.
-
s64 raw_atomic64_fetch_dec_release(atomic64_t *v)¶
atomic decrement with release ordering
Parameters
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v - 1) with release ordering.
Safe to use in noinstr code; prefer atomic64_fetch_dec_release() elsewhere.
Return
The original value of v.
-
s64 raw_atomic64_fetch_dec_relaxed(atomic64_t *v)¶
atomic decrement with relaxed ordering
Parameters
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v - 1) with relaxed ordering.
Safe to use in noinstr code; prefer atomic64_fetch_dec_relaxed() elsewhere.
Return
The original value of v.
-
void raw_atomic64_and(s64 i, atomic64_t *v)¶
atomic bitwise AND with relaxed ordering
Parameters
s64 is64 value
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v & i) with relaxed ordering.
Safe to use in noinstr code; prefer atomic64_and() elsewhere.
Return
Nothing.
-
s64 raw_atomic64_fetch_and(s64 i, atomic64_t *v)¶
atomic bitwise AND with full ordering
Parameters
s64 is64 value
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v & i) with full ordering.
Safe to use in noinstr code; prefer atomic64_fetch_and() elsewhere.
Return
The original value of v.
-
s64 raw_atomic64_fetch_and_acquire(s64 i, atomic64_t *v)¶
atomic bitwise AND with acquire ordering
Parameters
s64 is64 value
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v & i) with acquire ordering.
Safe to use in noinstr code; prefer atomic64_fetch_and_acquire() elsewhere.
Return
The original value of v.
-
s64 raw_atomic64_fetch_and_release(s64 i, atomic64_t *v)¶
atomic bitwise AND with release ordering
Parameters
s64 is64 value
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v & i) with release ordering.
Safe to use in noinstr code; prefer atomic64_fetch_and_release() elsewhere.
Return
The original value of v.
-
s64 raw_atomic64_fetch_and_relaxed(s64 i, atomic64_t *v)¶
atomic bitwise AND with relaxed ordering
Parameters
s64 is64 value
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v & i) with relaxed ordering.
Safe to use in noinstr code; prefer atomic64_fetch_and_relaxed() elsewhere.
Return
The original value of v.
-
void raw_atomic64_andnot(s64 i, atomic64_t *v)¶
atomic bitwise AND NOT with relaxed ordering
Parameters
s64 is64 value
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v & ~i) with relaxed ordering.
Safe to use in noinstr code; prefer atomic64_andnot() elsewhere.
Return
Nothing.
-
s64 raw_atomic64_fetch_andnot(s64 i, atomic64_t *v)¶
atomic bitwise AND NOT with full ordering
Parameters
s64 is64 value
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v & ~i) with full ordering.
Safe to use in noinstr code; prefer atomic64_fetch_andnot() elsewhere.
Return
The original value of v.
-
s64 raw_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)¶
atomic bitwise AND NOT with acquire ordering
Parameters
s64 is64 value
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v & ~i) with acquire ordering.
Safe to use in noinstr code; prefer atomic64_fetch_andnot_acquire() elsewhere.
Return
The original value of v.
-
s64 raw_atomic64_fetch_andnot_release(s64 i, atomic64_t *v)¶
atomic bitwise AND NOT with release ordering
Parameters
s64 is64 value
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v & ~i) with release ordering.
Safe to use in noinstr code; prefer atomic64_fetch_andnot_release() elsewhere.
Return
The original value of v.
-
s64 raw_atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v)¶
atomic bitwise AND NOT with relaxed ordering
Parameters
s64 is64 value
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v & ~i) with relaxed ordering.
Safe to use in noinstr code; prefer atomic64_fetch_andnot_relaxed() elsewhere.
Return
The original value of v.
-
void raw_atomic64_or(s64 i, atomic64_t *v)¶
atomic bitwise OR with relaxed ordering
Parameters
s64 is64 value
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v | i) with relaxed ordering.
Safe to use in noinstr code; prefer atomic64_or() elsewhere.
Return
Nothing.
-
s64 raw_atomic64_fetch_or(s64 i, atomic64_t *v)¶
atomic bitwise OR with full ordering
Parameters
s64 is64 value
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v | i) with full ordering.
Safe to use in noinstr code; prefer atomic64_fetch_or() elsewhere.
Return
The original value of v.
-
s64 raw_atomic64_fetch_or_acquire(s64 i, atomic64_t *v)¶
atomic bitwise OR with acquire ordering
Parameters
s64 is64 value
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v | i) with acquire ordering.
Safe to use in noinstr code; prefer atomic64_fetch_or_acquire() elsewhere.
Return
The original value of v.
-
s64 raw_atomic64_fetch_or_release(s64 i, atomic64_t *v)¶
atomic bitwise OR with release ordering
Parameters
s64 is64 value
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v | i) with release ordering.
Safe to use in noinstr code; prefer atomic64_fetch_or_release() elsewhere.
Return
The original value of v.
-
s64 raw_atomic64_fetch_or_relaxed(s64 i, atomic64_t *v)¶
atomic bitwise OR with relaxed ordering
Parameters
s64 is64 value
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v | i) with relaxed ordering.
Safe to use in noinstr code; prefer atomic64_fetch_or_relaxed() elsewhere.
Return
The original value of v.
-
void raw_atomic64_xor(s64 i, atomic64_t *v)¶
atomic bitwise XOR with relaxed ordering
Parameters
s64 is64 value
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v ^ i) with relaxed ordering.
Safe to use in noinstr code; prefer atomic64_xor() elsewhere.
Return
Nothing.
-
s64 raw_atomic64_fetch_xor(s64 i, atomic64_t *v)¶
atomic bitwise XOR with full ordering
Parameters
s64 is64 value
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v ^ i) with full ordering.
Safe to use in noinstr code; prefer atomic64_fetch_xor() elsewhere.
Return
The original value of v.
-
s64 raw_atomic64_fetch_xor_acquire(s64 i, atomic64_t *v)¶
atomic bitwise XOR with acquire ordering
Parameters
s64 is64 value
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v ^ i) with acquire ordering.
Safe to use in noinstr code; prefer atomic64_fetch_xor_acquire() elsewhere.
Return
The original value of v.
-
s64 raw_atomic64_fetch_xor_release(s64 i, atomic64_t *v)¶
atomic bitwise XOR with release ordering
Parameters
s64 is64 value
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v ^ i) with release ordering.
Safe to use in noinstr code; prefer atomic64_fetch_xor_release() elsewhere.
Return
The original value of v.
-
s64 raw_atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v)¶
atomic bitwise XOR with relaxed ordering
Parameters
s64 is64 value
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v ^ i) with relaxed ordering.
Safe to use in noinstr code; prefer atomic64_fetch_xor_relaxed() elsewhere.
Return
The original value of v.
-
s64 raw_atomic64_xchg(atomic64_t *v, s64 new)¶
atomic exchange with full ordering
Parameters
atomic64_t *vpointer to atomic64_t
s64 news64 value to assign
Description
Atomically updates v to new with full ordering.
Safe to use in noinstr code; prefer atomic64_xchg() elsewhere.
Return
The original value of v.
-
s64 raw_atomic64_xchg_acquire(atomic64_t *v, s64 new)¶
atomic exchange with acquire ordering
Parameters
atomic64_t *vpointer to atomic64_t
s64 news64 value to assign
Description
Atomically updates v to new with acquire ordering.
Safe to use in noinstr code; prefer atomic64_xchg_acquire() elsewhere.
Return
The original value of v.
-
s64 raw_atomic64_xchg_release(atomic64_t *v, s64 new)¶
atomic exchange with release ordering
Parameters
atomic64_t *vpointer to atomic64_t
s64 news64 value to assign
Description
Atomically updates v to new with release ordering.
Safe to use in noinstr code; prefer atomic64_xchg_release() elsewhere.
Return
The original value of v.
-
s64 raw_atomic64_xchg_relaxed(atomic64_t *v, s64 new)¶
atomic exchange with relaxed ordering
Parameters
atomic64_t *vpointer to atomic64_t
s64 news64 value to assign
Description
Atomically updates v to new with relaxed ordering.
Safe to use in noinstr code; prefer atomic64_xchg_relaxed() elsewhere.
Return
The original value of v.
-
s64 raw_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)¶
atomic compare and exchange with full ordering
Parameters
atomic64_t *vpointer to atomic64_t
s64 olds64 value to compare with
s64 news64 value to assign
Description
If (v == old), atomically updates v to new with full ordering. Otherwise, v is not modified and relaxed ordering is provided.
Safe to use in noinstr code; prefer atomic64_cmpxchg() elsewhere.
Return
The original value of v.
-
s64 raw_atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new)¶
atomic compare and exchange with acquire ordering
Parameters
atomic64_t *vpointer to atomic64_t
s64 olds64 value to compare with
s64 news64 value to assign
Description
If (v == old), atomically updates v to new with acquire ordering. Otherwise, v is not modified and relaxed ordering is provided.
Safe to use in noinstr code; prefer atomic64_cmpxchg_acquire() elsewhere.
Return
The original value of v.
-
s64 raw_atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new)¶
atomic compare and exchange with release ordering
Parameters
atomic64_t *vpointer to atomic64_t
s64 olds64 value to compare with
s64 news64 value to assign
Description
If (v == old), atomically updates v to new with release ordering. Otherwise, v is not modified and relaxed ordering is provided.
Safe to use in noinstr code; prefer atomic64_cmpxchg_release() elsewhere.
Return
The original value of v.
-
s64 raw_atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new)¶
atomic compare and exchange with relaxed ordering
Parameters
atomic64_t *vpointer to atomic64_t
s64 olds64 value to compare with
s64 news64 value to assign
Description
If (v == old), atomically updates v to new with relaxed ordering. Otherwise, v is not modified and relaxed ordering is provided.
Safe to use in noinstr code; prefer atomic64_cmpxchg_relaxed() elsewhere.
Return
The original value of v.
-
bool raw_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)¶
atomic compare and exchange with full ordering
Parameters
atomic64_t *vpointer to atomic64_t
s64 *oldpointer to s64 value to compare with
s64 news64 value to assign
Description
If (v == old), atomically updates v to new with full ordering. Otherwise, v is not modified, old is updated to the current value of v, and relaxed ordering is provided.
Safe to use in noinstr code; prefer atomic64_try_cmpxchg() elsewhere.
Return
true if the exchange occurred, false otherwise.
-
bool raw_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)¶
atomic compare and exchange with acquire ordering
Parameters
atomic64_t *vpointer to atomic64_t
s64 *oldpointer to s64 value to compare with
s64 news64 value to assign
Description
If (v == old), atomically updates v to new with acquire ordering. Otherwise, v is not modified, old is updated to the current value of v, and relaxed ordering is provided.
Safe to use in noinstr code; prefer atomic64_try_cmpxchg_acquire() elsewhere.
Return
true if the exchange occurred, false otherwise.
-
bool raw_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)¶
atomic compare and exchange with release ordering
Parameters
atomic64_t *vpointer to atomic64_t
s64 *oldpointer to s64 value to compare with
s64 news64 value to assign
Description
If (v == old), atomically updates v to new with release ordering. Otherwise, v is not modified, old is updated to the current value of v, and relaxed ordering is provided.
Safe to use in noinstr code; prefer atomic64_try_cmpxchg_release() elsewhere.
Return
true if the exchange occurred, false otherwise.
-
bool raw_atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new)¶
atomic compare and exchange with relaxed ordering
Parameters
atomic64_t *vpointer to atomic64_t
s64 *oldpointer to s64 value to compare with
s64 news64 value to assign
Description
If (v == old), atomically updates v to new with relaxed ordering. Otherwise, v is not modified, old is updated to the current value of v, and relaxed ordering is provided.
Safe to use in noinstr code; prefer atomic64_try_cmpxchg_relaxed() elsewhere.
Return
true if the exchange occurred, false otherwise.
-
bool raw_atomic64_sub_and_test(s64 i, atomic64_t *v)¶
atomic subtract and test if zero with full ordering
Parameters
s64 is64 value to subtract
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v - i) with full ordering.
Safe to use in noinstr code; prefer atomic64_sub_and_test() elsewhere.
Return
true if the resulting value of v is zero, false otherwise.
-
bool raw_atomic64_dec_and_test(atomic64_t *v)¶
atomic decrement and test if zero with full ordering
Parameters
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v - 1) with full ordering.
Safe to use in noinstr code; prefer atomic64_dec_and_test() elsewhere.
Return
true if the resulting value of v is zero, false otherwise.
-
bool raw_atomic64_inc_and_test(atomic64_t *v)¶
atomic increment and test if zero with full ordering
Parameters
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v + 1) with full ordering.
Safe to use in noinstr code; prefer atomic64_inc_and_test() elsewhere.
Return
true if the resulting value of v is zero, false otherwise.
-
bool raw_atomic64_add_negative(s64 i, atomic64_t *v)¶
atomic add and test if negative with full ordering
Parameters
s64 is64 value to add
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v + i) with full ordering.
Safe to use in noinstr code; prefer atomic64_add_negative() elsewhere.
Return
true if the resulting value of v is negative, false otherwise.
-
bool raw_atomic64_add_negative_acquire(s64 i, atomic64_t *v)¶
atomic add and test if negative with acquire ordering
Parameters
s64 is64 value to add
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v + i) with acquire ordering.
Safe to use in noinstr code; prefer atomic64_add_negative_acquire() elsewhere.
Return
true if the resulting value of v is negative, false otherwise.
-
bool raw_atomic64_add_negative_release(s64 i, atomic64_t *v)¶
atomic add and test if negative with release ordering
Parameters
s64 is64 value to add
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v + i) with release ordering.
Safe to use in noinstr code; prefer atomic64_add_negative_release() elsewhere.
Return
true if the resulting value of v is negative, false otherwise.
-
bool raw_atomic64_add_negative_relaxed(s64 i, atomic64_t *v)¶
atomic add and test if negative with relaxed ordering
Parameters
s64 is64 value to add
atomic64_t *vpointer to atomic64_t
Description
Atomically updates v to (v + i) with relaxed ordering.
Safe to use in noinstr code; prefer atomic64_add_negative_relaxed() elsewhere.
Return
true if the resulting value of v is negative, false otherwise.
-
s64 raw_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)¶
atomic add unless value with full ordering
Parameters
atomic64_t *vpointer to atomic64_t
s64 as64 value to add
s64 us64 value to compare with
Description
If (v != u), atomically updates v to (v + a) with full ordering. Otherwise, v is not modified and relaxed ordering is provided.
Safe to use in noinstr code; prefer atomic64_fetch_add_unless() elsewhere.
Return
The original value of v.
-
bool raw_atomic64_add_unless(atomic64_t *v, s64 a, s64 u)¶
atomic add unless value with full ordering
Parameters
atomic64_t *vpointer to atomic64_t
s64 as64 value to add
s64 us64 value to compare with
Description
If (v != u), atomically updates v to (v + a) with full ordering. Otherwise, v is not modified and relaxed ordering is provided.
Safe to use in noinstr code; prefer atomic64_add_unless() elsewhere.
Return
true if v was updated, false otherwise.
-
bool raw_atomic64_inc_not_zero(atomic64_t *v)¶
atomic increment unless zero with full ordering
Parameters
atomic64_t *vpointer to atomic64_t
Description
If (v != 0), atomically updates v to (v + 1) with full ordering. Otherwise, v is not modified and relaxed ordering is provided.
Safe to use in noinstr code; prefer atomic64_inc_not_zero() elsewhere.
Return
true if v was updated, false otherwise.
-
bool raw_atomic64_inc_unless_negative(atomic64_t *v)¶
atomic increment unless negative with full ordering
Parameters
atomic64_t *vpointer to atomic64_t
Description
If (v >= 0), atomically updates v to (v + 1) with full ordering. Otherwise, v is not modified and relaxed ordering is provided.
Safe to use in noinstr code; prefer atomic64_inc_unless_negative() elsewhere.
Return
true if v was updated, false otherwise.
-
bool raw_atomic64_dec_unless_positive(atomic64_t *v)¶
atomic decrement unless positive with full ordering
Parameters
atomic64_t *vpointer to atomic64_t
Description
If (v <= 0), atomically updates v to (v - 1) with full ordering. Otherwise, v is not modified and relaxed ordering is provided.
Safe to use in noinstr code; prefer atomic64_dec_unless_positive() elsewhere.
Return
true if v was updated, false otherwise.
-
s64 raw_atomic64_dec_if_positive(atomic64_t *v)¶
atomic decrement if positive with full ordering
Parameters
atomic64_t *vpointer to atomic64_t
Description
If (v > 0), atomically updates v to (v - 1) with full ordering. Otherwise, v is not modified and relaxed ordering is provided.
Safe to use in noinstr code; prefer atomic64_dec_if_positive() elsewhere.
Return
The original value of v minus one, regardless of whether v was updated.
-
long raw_atomic_long_read(const atomic_long_t *v)¶
atomic load with relaxed ordering
Parameters
const atomic_long_t *vpointer to atomic_long_t
Description
Atomically loads the value of v with relaxed ordering.
Safe to use in noinstr code; prefer atomic_long_read() elsewhere.
Return
The value loaded from v.
-
long raw_atomic_long_read_acquire(const atomic_long_t *v)¶
atomic load with acquire ordering
Parameters
const atomic_long_t *vpointer to atomic_long_t
Description
Atomically loads the value of v with acquire ordering.
Safe to use in noinstr code; prefer atomic_long_read_acquire() elsewhere.
Return
The value loaded from v.
-
void raw_atomic_long_set(atomic_long_t *v, long i)¶
atomic set with relaxed ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
long ilong value to assign
Description
Atomically sets v to i with relaxed ordering.
Safe to use in noinstr code; prefer atomic_long_set() elsewhere.
Return
Nothing.
-
void raw_atomic_long_set_release(atomic_long_t *v, long i)¶
atomic set with release ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
long ilong value to assign
Description
Atomically sets v to i with release ordering.
Safe to use in noinstr code; prefer atomic_long_set_release() elsewhere.
Return
Nothing.
-
void raw_atomic_long_add(long i, atomic_long_t *v)¶
atomic add with relaxed ordering
Parameters
long ilong value to add
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v + i) with relaxed ordering.
Safe to use in noinstr code; prefer atomic_long_add() elsewhere.
Return
Nothing.
-
long raw_atomic_long_add_return(long i, atomic_long_t *v)¶
atomic add with full ordering
Parameters
long ilong value to add
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v + i) with full ordering.
Safe to use in noinstr code; prefer atomic_long_add_return() elsewhere.
Return
The updated value of v.
-
long raw_atomic_long_add_return_acquire(long i, atomic_long_t *v)¶
atomic add with acquire ordering
Parameters
long ilong value to add
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v + i) with acquire ordering.
Safe to use in noinstr code; prefer atomic_long_add_return_acquire() elsewhere.
Return
The updated value of v.
-
long raw_atomic_long_add_return_release(long i, atomic_long_t *v)¶
atomic add with release ordering
Parameters
long ilong value to add
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v + i) with release ordering.
Safe to use in noinstr code; prefer atomic_long_add_return_release() elsewhere.
Return
The updated value of v.
-
long raw_atomic_long_add_return_relaxed(long i, atomic_long_t *v)¶
atomic add with relaxed ordering
Parameters
long ilong value to add
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v + i) with relaxed ordering.
Safe to use in noinstr code; prefer atomic_long_add_return_relaxed() elsewhere.
Return
The updated value of v.
-
long raw_atomic_long_fetch_add(long i, atomic_long_t *v)¶
atomic add with full ordering
Parameters
long ilong value to add
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v + i) with full ordering.
Safe to use in noinstr code; prefer atomic_long_fetch_add() elsewhere.
Return
The original value of v.
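atomic_long_t follows the same pattern at native word width (32-bit on 32-bit kernels, 64-bit on 64-bit kernels), which suits counts that scale with the machine. A hypothetical object-count sketch, assuming ATOMIC_LONG_INIT() as the static initializer:
#include <linux/atomic.h>

static atomic_long_t my_nr_objects = ATOMIC_LONG_INIT(0);	/* hypothetical */

static long my_object_created(void)
{
	/* Returns the count as it was before this increment. */
	return raw_atomic_long_fetch_add(1, &my_nr_objects);
}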
-
long raw_atomic_long_fetch_add_acquire(long i, atomic_long_t *v)¶
atomic add with acquire ordering
Parameters
long ilong value to add
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v + i) with acquire ordering.
Safe to use in noinstr code; prefer atomic_long_fetch_add_acquire() elsewhere.
Return
The original value of v.
-
long raw_atomic_long_fetch_add_release(long i, atomic_long_t *v)¶
atomic add with release ordering
Parameters
long ilong value to add
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v + i) with release ordering.
Safe to use in noinstr code; prefer atomic_long_fetch_add_release() elsewhere.
Return
The original value of v.
-
long raw_atomic_long_fetch_add_relaxed(long i, atomic_long_t *v)¶
atomic add with relaxed ordering
Parameters
long ilong value to add
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v + i) with relaxed ordering.
Safe to use in noinstr code; prefer atomic_long_fetch_add_relaxed() elsewhere.
Return
The original value of v.
-
void raw_atomic_long_sub(long i, atomic_long_t *v)¶
atomic subtract with relaxed ordering
Parameters
long ilong value to subtract
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v - i) with relaxed ordering.
Safe to use in noinstr code; prefer atomic_long_sub() elsewhere.
Return
Nothing.
-
long raw_atomic_long_sub_return(long i, atomic_long_t *v)¶
atomic subtract with full ordering
Parameters
long ilong value to subtract
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v - i) with full ordering.
Safe to use in noinstr code; prefer atomic_long_sub_return() elsewhere.
Return
The updated value of v.
-
long raw_atomic_long_sub_return_acquire(long i, atomic_long_t *v)¶
atomic subtract with acquire ordering
Parameters
long ilong value to subtract
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v - i) with acquire ordering.
Safe to use in noinstr code; prefer atomic_long_sub_return_acquire() elsewhere.
Return
The updated value of v.
-
long raw_atomic_long_sub_return_release(long i, atomic_long_t *v)¶
atomic subtract with release ordering
Parameters
long ilong value to subtract
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v - i) with release ordering.
Safe to use in noinstr code; prefer atomic_long_sub_return_release() elsewhere.
Return
The updated value of v.
-
long raw_atomic_long_sub_return_relaxed(long i, atomic_long_t *v)¶
atomic subtract with relaxed ordering
Parameters
long ilong value to subtract
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v - i) with relaxed ordering.
Safe to use in noinstr code; prefer atomic_long_sub_return_relaxed() elsewhere.
Return
The updated value of v.
-
long raw_atomic_long_fetch_sub(long i, atomic_long_t *v)¶
atomic subtract with full ordering
Parameters
long ilong value to subtract
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v - i) with full ordering.
Safe to use in noinstr code; prefer atomic_long_fetch_sub() elsewhere.
Return
The original value of v.
-
long raw_atomic_long_fetch_sub_acquire(long i, atomic_long_t *v)¶
atomic subtract with acquire ordering
Parameters
long ilong value to subtract
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v - i) with acquire ordering.
Safe to use in noinstr code; prefer atomic_long_fetch_sub_acquire() elsewhere.
Return
The original value of v.
-
long raw_atomic_long_fetch_sub_release(long i, atomic_long_t *v)¶
atomic subtract with release ordering
Parameters
long ilong value to subtract
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v - i) with release ordering.
Safe to use in noinstr code; prefer atomic_long_fetch_sub_release() elsewhere.
Return
The original value of v.
-
long raw_atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v)¶
atomic subtract with relaxed ordering
Parameters
long ilong value to subtract
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v - i) with relaxed ordering.
Safe to use in noinstr code; prefer atomic_long_fetch_sub_relaxed() elsewhere.
Return
The original value of v.
-
void raw_atomic_long_inc(atomic_long_t *v)¶
atomic increment with relaxed ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v + 1) with relaxed ordering.
Safe to use in noinstr code; prefer atomic_long_inc() elsewhere.
Return
Nothing.
-
long raw_atomic_long_inc_return(atomic_long_t *v)¶
atomic increment with full ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v + 1) with full ordering.
Safe to use in noinstr code; prefer atomic_long_inc_return() elsewhere.
Return
The updated value of v.
-
long raw_atomic_long_inc_return_acquire(atomic_long_t *v)¶
atomic increment with acquire ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v + 1) with acquire ordering.
Safe to use in noinstr code; prefer atomic_long_inc_return_acquire() elsewhere.
Return
The updated value of v.
-
long raw_atomic_long_inc_return_release(atomic_long_t *v)¶
atomic increment with release ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v + 1) with release ordering.
Safe to use in noinstr code; prefer atomic_long_inc_return_release() elsewhere.
Return
The updated value of v.
-
long raw_atomic_long_inc_return_relaxed(atomic_long_t *v)¶
atomic increment with relaxed ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v + 1) with relaxed ordering.
Safe to use in noinstr code; prefer atomic_long_inc_return_relaxed() elsewhere.
Return
The updated value of v.
-
long raw_atomic_long_fetch_inc(atomic_long_t *v)¶
atomic increment with full ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v + 1) with full ordering.
Safe to use in noinstr code; prefer atomic_long_fetch_inc() elsewhere.
Return
The original value of v.
-
long raw_atomic_long_fetch_inc_acquire(atomic_long_t *v)¶
atomic increment with acquire ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v + 1) with acquire ordering.
Safe to use in noinstr code; prefer atomic_long_fetch_inc_acquire() elsewhere.
Return
The original value of v.
-
long raw_atomic_long_fetch_inc_release(atomic_long_t *v)¶
atomic increment with release ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v + 1) with release ordering.
Safe to use in noinstr code; prefer atomic_long_fetch_inc_release() elsewhere.
Return
The original value of v.
-
long raw_atomic_long_fetch_inc_relaxed(atomic_long_t *v)¶
atomic increment with relaxed ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v + 1) with relaxed ordering.
Safe to use in noinstr code; prefer atomic_long_fetch_inc_relaxed() elsewhere.
Return
The original value of v.
-
void raw_atomic_long_dec(atomic_long_t *v)¶
atomic decrement with relaxed ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v - 1) with relaxed ordering.
Safe to use in noinstr code; prefer atomic_long_dec() elsewhere.
Return
Nothing.
-
long raw_atomic_long_dec_return(atomic_long_t *v)¶
atomic decrement with full ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v - 1) with full ordering.
Safe to use in noinstr code; prefer atomic_long_dec_return() elsewhere.
Return
The updated value of v.
-
long raw_atomic_long_dec_return_acquire(atomic_long_t *v)¶
atomic decrement with acquire ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v - 1) with acquire ordering.
Safe to use in noinstr code; prefer atomic_long_dec_return_acquire() elsewhere.
Return
The updated value of v.
-
long raw_atomic_long_dec_return_release(atomic_long_t *v)¶
atomic decrement with release ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v - 1) with release ordering.
Safe to use in noinstr code; prefer atomic_long_dec_return_release() elsewhere.
Return
The updated value of v.
-
long raw_atomic_long_dec_return_relaxed(atomic_long_t *v)¶
atomic decrement with relaxed ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v - 1) with relaxed ordering.
Safe to use in noinstr code; prefer atomic_long_dec_return_relaxed() elsewhere.
Return
The updated value of v.
-
long raw_atomic_long_fetch_dec(atomic_long_t *v)¶
atomic decrement with full ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v - 1) with full ordering.
Safe to use in noinstr code; prefer atomic_long_fetch_dec() elsewhere.
Return
The original value of v.
-
long raw_atomic_long_fetch_dec_acquire(atomic_long_t *v)¶
atomic decrement with acquire ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v - 1) with acquire ordering.
Safe to use in noinstr code; prefer atomic_long_fetch_dec_acquire() elsewhere.
Return
The original value of v.
-
long raw_atomic_long_fetch_dec_release(atomic_long_t *v)¶
atomic decrement with release ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v - 1) with release ordering.
Safe to use in noinstr code; prefer atomic_long_fetch_dec_release() elsewhere.
Return
The original value of v.
-
long raw_atomic_long_fetch_dec_relaxed(atomic_long_t *v)¶
atomic decrement with relaxed ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v - 1) with relaxed ordering.
Safe to use in noinstr code; prefer atomic_long_fetch_dec_relaxed() elsewhere.
Return
The original value of v.
-
void raw_atomic_long_and(long i, atomic_long_t *v)¶
atomic bitwise AND with relaxed ordering
Parameters
long ilong value
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v & i) with relaxed ordering.
Safe to use in noinstr code; prefer atomic_long_and() elsewhere.
Return
Nothing.
-
long raw_atomic_long_fetch_and(long i, atomic_long_t *v)¶
atomic bitwise AND with full ordering
Parameters
long ilong value
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v & i) with full ordering.
Safe to use in noinstr code; prefer atomic_long_fetch_and() elsewhere.
Return
The original value of v.
-
long raw_atomic_long_fetch_and_acquire(long i, atomic_long_t *v)¶
atomic bitwise AND with acquire ordering
Parameters
long ilong value
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v & i) with acquire ordering.
Safe to use in noinstr code; prefer atomic_long_fetch_and_acquire() elsewhere.
Return
The original value of v.
-
long raw_atomic_long_fetch_and_release(long i, atomic_long_t *v)¶
atomic bitwise AND with release ordering
Parameters
long ilong value
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v & i) with release ordering.
Safe to use in noinstr code; prefer atomic_long_fetch_and_release() elsewhere.
Return
The original value of v.
-
long raw_atomic_long_fetch_and_relaxed(long i, atomic_long_t *v)¶
atomic bitwise AND with relaxed ordering
Parameters
long ilong value
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v & i) with relaxed ordering.
Safe to use in noinstr code; prefer atomic_long_fetch_and_relaxed() elsewhere.
Return
The original value of v.
-
void raw_atomic_long_andnot(long i, atomic_long_t *v)¶
atomic bitwise AND NOT with relaxed ordering
Parameters
long ilong value
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v & ~i) with relaxed ordering.
Safe to use in noinstr code; prefer atomic_long_andnot() elsewhere.
Return
Nothing.
-
long raw_atomic_long_fetch_andnot(long i, atomic_long_t *v)¶
atomic bitwise AND NOT with full ordering
Parameters
long ilong value
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v & ~i) with full ordering.
Safe to use in noinstr code; prefer atomic_long_fetch_andnot() elsewhere.
Return
The original value of v.
-
long raw_atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v)¶
atomic bitwise AND NOT with acquire ordering
Parameters
long ilong value
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v & ~i) with acquire ordering.
Safe to use in noinstr code; prefer atomic_long_fetch_andnot_acquire() elsewhere.
Return
The original value of v.
-
long raw_atomic_long_fetch_andnot_release(long i, atomic_long_t *v)¶
atomic bitwise AND NOT with release ordering
Parameters
long ilong value
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v & ~i) with release ordering.
Safe to use in noinstr code; prefer atomic_long_fetch_andnot_release() elsewhere.
Return
The original value of v.
-
long raw_atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v)¶
atomic bitwise AND NOT with relaxed ordering
Parameters
long ilong value
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v & ~i) with relaxed ordering.
Safe to use in noinstr code; prefer atomic_long_fetch_andnot_relaxed() elsewhere.
Return
The original value of v.
-
void raw_atomic_long_or(long i, atomic_long_t *v)¶
atomic bitwise OR with relaxed ordering
Parameters
long ilong value
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v | i) with relaxed ordering.
Safe to use in noinstr code; prefer atomic_long_or() elsewhere.
Return
Nothing.
-
long raw_atomic_long_fetch_or(long i, atomic_long_t *v)¶
atomic bitwise OR with full ordering
Parameters
long ilong value
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v | i) with full ordering.
Safe to use in noinstr code; prefer atomic_long_fetch_or() elsewhere.
Return
The original value of v.
-
long raw_atomic_long_fetch_or_acquire(long i, atomic_long_t *v)¶
atomic bitwise OR with acquire ordering
Parameters
long ilong value
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v | i) with acquire ordering.
Safe to use in noinstr code; prefer atomic_long_fetch_or_acquire() elsewhere.
Return
The original value of v.
-
long raw_atomic_long_fetch_or_release(long i, atomic_long_t *v)¶
atomic bitwise OR with release ordering
Parameters
long ilong value
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v | i) with release ordering.
Safe to use in noinstr code; prefer atomic_long_fetch_or_release() elsewhere.
Return
The original value of v.
-
long raw_atomic_long_fetch_or_relaxed(long i, atomic_long_t *v)¶
atomic bitwise OR with relaxed ordering
Parameters
long ilong value
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v | i) with relaxed ordering.
Safe to use in noinstr code; prefer atomic_long_fetch_or_relaxed() elsewhere.
Return
The original value of v.
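As an illustration of the fetch-or family, here is a minimal sketch (all mydrv_* names are made up; it uses the ordinary instrumented atomic_long_fetch_or(), since the raw_*() variants above are intended for noinstr code) that sets a one-shot flag and reports whether the caller was the one who actually set it:
#include <linux/atomic.h>
#include <linux/bits.h>

#define MYDRV_FLAG_SHUTDOWN	BIT(0)		/* hypothetical flag bit */

static atomic_long_t mydrv_flags = ATOMIC_LONG_INIT(0);

/* Returns true only for the caller that actually flipped the bit. */
static bool mydrv_begin_shutdown(void)
{
	long old = atomic_long_fetch_or(MYDRV_FLAG_SHUTDOWN, &mydrv_flags);

	return !(old & MYDRV_FLAG_SHUTDOWN);
}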
-
void raw_atomic_long_xor(long i, atomic_long_t *v)¶
atomic bitwise XOR with relaxed ordering
Parameters
long ilong value
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v ^ i) with relaxed ordering.
Safe to use in noinstr code; prefer atomic_long_xor() elsewhere.
Return
Nothing.
-
long raw_atomic_long_fetch_xor(long i, atomic_long_t *v)¶
atomic bitwise XOR with full ordering
Parameters
long ilong value
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v ^ i) with full ordering.
Safe to use in noinstr code; prefer atomic_long_fetch_xor() elsewhere.
Return
The original value of v.
-
long raw_atomic_long_fetch_xor_acquire(long i, atomic_long_t *v)¶
atomic bitwise XOR with acquire ordering
Parameters
long ilong value
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v ^ i) with acquire ordering.
Safe to use in noinstr code; prefer atomic_long_fetch_xor_acquire() elsewhere.
Return
The original value of v.
-
long raw_atomic_long_fetch_xor_release(long i, atomic_long_t *v)¶
atomic bitwise XOR with release ordering
Parameters
long ilong value
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v ^ i) with release ordering.
Safe to use in noinstr code; prefer atomic_long_fetch_xor_release() elsewhere.
Return
The original value of v.
-
long raw_atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v)¶
atomic bitwise XOR with relaxed ordering
Parameters
long ilong value
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v ^ i) with relaxed ordering.
Safe to use in noinstr code; prefer atomic_long_fetch_xor_relaxed() elsewhere.
Return
The original value of v.
-
long raw_atomic_long_xchg(atomic_long_t *v, long new)¶
atomic exchange with full ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
long newlong value to assign
Description
Atomically updates v to new with full ordering.
Safe to use in noinstr code; prefer atomic_long_xchg() elsewhere.
Return
The original value of v.
-
long raw_atomic_long_xchg_acquire(atomic_long_t *v, long new)¶
atomic exchange with acquire ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
long newlong value to assign
Description
Atomically updates v to new with acquire ordering.
Safe to use in noinstr code; prefer atomic_long_xchg_acquire() elsewhere.
Return
The original value of v.
-
long raw_atomic_long_xchg_release(atomic_long_t *v, long new)¶
atomic exchange with release ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
long newlong value to assign
Description
Atomically updates v to new with release ordering.
Safe to use in noinstr code; prefer atomic_long_xchg_release() elsewhere.
Return
The original value of v.
-
long raw_atomic_long_xchg_relaxed(atomic_long_t *v, long new)¶
atomic exchange with relaxed ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
long newlong value to assign
Description
Atomically updates v to new with relaxed ordering.
Safe to use in noinstr code; prefer atomic_long_xchg_relaxed() elsewhere.
Return
The original value of v.
-
long raw_atomic_long_cmpxchg(atomic_long_t *v, long old, long new)¶
atomic compare and exchange with full ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
long oldlong value to compare with
long newlong value to assign
Description
If (v == old), atomically updates v to new with full ordering. Otherwise, v is not modified and relaxed ordering is provided.
Safe to use in noinstr code; prefer atomic_long_cmpxchg() elsewhere.
Return
The original value of v.
-
long raw_atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new)¶
atomic compare and exchange with acquire ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
long oldlong value to compare with
long newlong value to assign
Description
If (v == old), atomically updates v to new with acquire ordering. Otherwise, v is not modified and relaxed ordering is provided.
Safe to use in noinstr code; prefer atomic_long_cmpxchg_acquire() elsewhere.
Return
The original value of v.
-
long raw_atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new)¶
atomic compare and exchange with release ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
long oldlong value to compare with
long newlong value to assign
Description
If (v == old), atomically updates v to new with release ordering. Otherwise, v is not modified and relaxed ordering is provided.
Safe to use in noinstr code; prefer atomic_long_cmpxchg_release() elsewhere.
Return
The original value of v.
-
long raw_atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new)¶
atomic compare and exchange with relaxed ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
long oldlong value to compare with
long newlong value to assign
Description
If (v == old), atomically updates v to new with relaxed ordering. Otherwise, v is not modified and relaxed ordering is provided.
Safe to use in noinstr code; prefer atomic_long_cmpxchg_relaxed() elsewhere.
Return
The original value of v.
-
bool raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)¶
atomic compare and exchange with full ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
long *oldpointer to long value to compare with
long newlong value to assign
Description
If (v == old), atomically updates v to new with full ordering. Otherwise, v is not modified, old is updated to the current value of v, and relaxed ordering is provided.
Safe to use in noinstr code; prefer atomic_long_try_cmpxchg() elsewhere.
Return
true if the exchange occurred, false otherwise.
-
bool raw_atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new)¶
atomic compare and exchange with acquire ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
long *oldpointer to long value to compare with
long newlong value to assign
Description
If (v == old), atomically updates v to new with acquire ordering. Otherwise, v is not modified, old is updated to the current value of v, and relaxed ordering is provided.
Safe to use in noinstr code; prefer atomic_long_try_cmpxchg_acquire() elsewhere.
Return
true if the exchange occurred, false otherwise.
-
bool raw_atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new)¶
atomic compare and exchange with release ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
long *oldpointer to long value to compare with
long newlong value to assign
Description
If (v == old), atomically updates v to new with release ordering. Otherwise, v is not modified, old is updated to the current value of v, and relaxed ordering is provided.
Safe to use in noinstr code; prefer atomic_long_try_cmpxchg_release() elsewhere.
Return
true if the exchange occurred, false otherwise.
-
bool raw_atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new)¶
atomic compare and exchange with relaxed ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
long *oldpointer to long value to compare with
long newlong value to assign
Description
If (v == old), atomically updates v to new with relaxed ordering. Otherwise, v is not modified, old is updated to the current value of v, and relaxed ordering is provided.
Safe to use in noinstr code; prefer atomic_long_try_cmpxchg_relaxed() elsewhere.
Return
true if the exchange occurred, false otherwise.
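A common way to use the try_cmpxchg() operations is a read/modify/retry loop: on failure, old is refreshed with the current value, so the condition is re-checked before each retry. A minimal sketch, assuming a made-up mydrv_* helper and using the ordinary atomic_long_try_cmpxchg() rather than the raw_*() form:
#include <linux/atomic.h>

/* Increment *v only while the result would stay below ceiling. */
static bool mydrv_inc_below(atomic_long_t *v, long ceiling)
{
	long old = atomic_long_read(v);

	do {
		if (old >= ceiling)
			return false;	/* would exceed the limit */
	} while (!atomic_long_try_cmpxchg(v, &old, old + 1));

	return true;
}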
-
bool raw_atomic_long_sub_and_test(long i, atomic_long_t *v)¶
atomic subtract and test if zero with full ordering
Parameters
long ilong value to subtract
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v - i) with full ordering.
Safe to use in noinstr code; prefer atomic_long_sub_and_test() elsewhere.
Return
true if the resulting value of v is zero, false otherwise.
-
bool raw_atomic_long_dec_and_test(atomic_long_t *v)¶
atomic decrement and test if zero with full ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v - 1) with full ordering.
Safe to use in noinstr code; prefer atomic_long_dec_and_test() elsewhere.
Return
true if the resulting value of v is zero, false otherwise.
-
bool raw_atomic_long_inc_and_test(atomic_long_t *v)¶
atomic increment and test if zero with full ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v + 1) with full ordering.
Safe to use in noinstr code; prefer atomic_long_inc_and_test() elsewhere.
Return
true if the resulting value of v is zero, false otherwise.
-
bool raw_atomic_long_add_negative(long i, atomic_long_t *v)¶
atomic add and test if negative with full ordering
Parameters
long ilong value to add
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v + i) with full ordering.
Safe to use in noinstr code; prefer atomic_long_add_negative() elsewhere.
Return
true if the resulting value of v is negative, false otherwise.
-
bool raw_atomic_long_add_negative_acquire(long i, atomic_long_t *v)¶
atomic add and test if negative with acquire ordering
Parameters
long ilong value to add
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v + i) with acquire ordering.
Safe to use in noinstr code; prefer atomic_long_add_negative_acquire() elsewhere.
Return
true if the resulting value of v is negative, false otherwise.
-
bool raw_atomic_long_add_negative_release(long i, atomic_long_t *v)¶
atomic add and test if negative with release ordering
Parameters
long ilong value to add
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v + i) with release ordering.
Safe to use in noinstr code; prefer atomic_long_add_negative_release() elsewhere.
Return
true if the resulting value of v is negative, false otherwise.
-
bool raw_atomic_long_add_negative_relaxed(long i, atomic_long_t *v)¶
atomic add and test if negative with relaxed ordering
Parameters
long ilong value to add
atomic_long_t *vpointer to atomic_long_t
Description
Atomically updates v to (v + i) with relaxed ordering.
Safe to use in noinstr code; prefer atomic_long_add_negative_relaxed() elsewhere.
Return
true if the resulting value of v is negative, false otherwise.
-
long raw_atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u)¶
atomic add unless value with full ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
long along value to add
long ulong value to compare with
Description
If (v != u), atomically updates v to (v + a) with full ordering. Otherwise, v is not modified and relaxed ordering is provided.
Safe to use in noinstr code; prefer atomic_long_fetch_add_unless() elsewhere.
Return
The original value of v.
-
bool raw_atomic_long_add_unless(atomic_long_t *v, long a, long u)¶
atomic add unless value with full ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
long along value to add
long ulong value to compare with
Description
If (v != u), atomically updates v to (v + a) with full ordering. Otherwise, v is not modified and relaxed ordering is provided.
Safe to use in noinstr code; prefer atomic_long_add_unless() elsewhere.
Return
true if v was updated, false otherwise.
-
bool raw_atomic_long_inc_not_zero(atomic_long_t *v)¶
atomic increment unless zero with full ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
Description
If (v != 0), atomically updates v to (v + 1) with full ordering. Otherwise, v is not modified and relaxed ordering is provided.
Safe to use in noinstr code; prefer atomic_long_inc_not_zero() elsewhere.
Return
true if v was updated, false otherwise.
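inc_not_zero() is the building block of the usual "take a reference only if the object is still live" pattern. A hedged sketch (struct mydrv_obj is hypothetical; real code would normally use refcount_t, but the shape of the check is the same):
#include <linux/atomic.h>

struct mydrv_obj {
	atomic_long_t refs;
	/* ... payload ... */
};

/* Returns obj with an extra reference, or NULL if it is already dying. */
static struct mydrv_obj *mydrv_obj_tryget(struct mydrv_obj *obj)
{
	if (!atomic_long_inc_not_zero(&obj->refs))
		return NULL;
	return obj;
}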
-
bool raw_atomic_long_inc_unless_negative(atomic_long_t *v)¶
atomic increment unless negative with full ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
Description
If (v >= 0), atomically updates v to (v + 1) with full ordering. Otherwise, v is not modified and relaxed ordering is provided.
Safe to use in noinstr code; prefer atomic_long_inc_unless_negative() elsewhere.
Return
true if v was updated, false otherwise.
-
bool raw_atomic_long_dec_unless_positive(atomic_long_t *v)¶
atomic decrement unless positive with full ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
Description
If (v <= 0), atomically updates v to (v - 1) with full ordering. Otherwise, v is not modified and relaxed ordering is provided.
Safe to use in noinstr code; prefer atomic_long_dec_unless_positive() elsewhere.
Return
true if v was updated, false otherwise.
-
long raw_atomic_long_dec_if_positive(atomic_long_t *v)¶
atomic decrement if positive with full ordering
Parameters
atomic_long_t *vpointer to atomic_long_t
Description
If (v > 0), atomically updates v to (v - 1) with full ordering. Otherwise, v is not modified and relaxed ordering is provided.
Safe to use in noinstr code; prefer atomic_long_dec_if_positive() elsewhere.
Return
The old value of (v - 1), regardless of whether v was updated.
Kernel objects manipulation¶
-
char *kobject_get_path(const struct kobject *kobj, gfp_t gfp_mask)¶
Allocate memory and fill in the path for kobj.
Parameters
const struct kobject *kobjkobject in question, with which to build the path
gfp_t gfp_maskthe allocation type used to allocate the path
Return
The newly allocated memory, caller must free with kfree().
-
int kobject_set_name(struct kobject *kobj, const char *fmt, ...)¶
Set the name of a kobject.
Parameters
struct kobject *kobjstruct kobject to set the name of
const char *fmtformat string used to build the name
...variable arguments
Description
This sets the name of the kobject. If you have already added the
kobject to the system, you must call kobject_rename() in order to
change the name of the kobject.
-
void kobject_init(struct kobject *kobj, const struct kobj_type *ktype)¶
Initialize a kobject structure.
Parameters
struct kobject *kobjpointer to the kobject to initialize
const struct kobj_type *ktypepointer to the ktype for this kobject.
Description
This function will properly initialize a kobject such that it can then
be passed to the kobject_add() call.
After this function is called, the kobject MUST be cleaned up by a call
to kobject_put(), not by a call to kfree directly to ensure that all of
the memory is cleaned up properly.
-
int kobject_add(struct kobject *kobj, struct kobject *parent, const char *fmt, ...)¶
The main kobject add function.
Parameters
struct kobject *kobjthe kobject to add
struct kobject *parentpointer to the parent of the kobject.
const char *fmtformat to name the kobject with.
...variable arguments
Description
The kobject name is set and added to the kobject hierarchy in this function.
If parent is set, then the parent of the kobj will be set to it. If parent is NULL, then the parent of the kobj will be set to the kobject associated with the kset assigned to this kobject. If no kset is assigned to the kobject, then the kobject will be located in the root of the sysfs tree.
Note, no “add” uevent will be created with this call, the caller should set up all of the necessary sysfs files for the object and then call kobject_uevent() with the UEVENT_ADD parameter to ensure that userspace is properly notified of this kobject’s creation.
Return
If this function returns an error, kobject_put() must be called to properly clean up the memory associated with the object. Under no instance should the kobject that is passed to this function be directly freed with a call to kfree(); that can leak memory.
If this function returns success, kobject_put() must also be called in order to properly clean up the memory associated with the object.
In short, once this function is called, kobject_put() MUST be called when the use of the object is finished in order to properly free everything.
-
int kobject_init_and_add(struct kobject *kobj, const struct kobj_type *ktype, struct kobject *parent, const char *fmt, ...)¶
Initialize a kobject structure and add it to the kobject hierarchy.
Parameters
struct kobject *kobjpointer to the kobject to initialize
const struct kobj_type *ktypepointer to the ktype for this kobject.
struct kobject *parentpointer to the parent of this kobject.
const char *fmtthe name of the kobject.
...variable arguments
Description
This function combines the call to kobject_init() and kobject_add().
If this function returns an error, kobject_put() must be called to
properly clean up the memory associated with the object. This is the
same type of error handling after a call to kobject_add() and kobject
lifetime rules are the same here.
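A minimal sketch of the error handling described above (struct mydrv_obj, mydrv_ktype and the chosen name are illustrative assumptions): note that kobject_put(), never kfree(), is used for cleanup on both the error and the success paths.
#include <linux/kobject.h>
#include <linux/slab.h>

struct mydrv_obj {
	struct kobject kobj;
};

static void mydrv_release(struct kobject *kobj)
{
	kfree(container_of(kobj, struct mydrv_obj, kobj));
}

static const struct kobj_type mydrv_ktype = {
	.release = mydrv_release,
};

static int mydrv_create_obj(struct kobject *parent)
{
	struct mydrv_obj *obj;
	int ret;

	obj = kzalloc(sizeof(*obj), GFP_KERNEL);
	if (!obj)
		return -ENOMEM;

	ret = kobject_init_and_add(&obj->kobj, &mydrv_ktype, parent, "mydrv");
	if (ret) {
		kobject_put(&obj->kobj);	/* frees obj via mydrv_release() */
		return ret;
	}

	kobject_uevent(&obj->kobj, KOBJ_ADD);
	return 0;
}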
-
int kobject_rename(struct kobject *kobj, const char *new_name)¶
Change the name of an object.
Parameters
struct kobject *kobjobject in question.
const char *new_nameobject’s new name
Description
It is the responsibility of the caller to provide mutual exclusion between two different calls of kobject_rename on the same kobject and to ensure that new_name is valid and won’t conflict with other kobjects.
-
int kobject_move(struct kobject *kobj, struct kobject *new_parent)¶
Move object to another parent.
Parameters
struct kobject *kobjobject in question.
struct kobject *new_parentobject’s new parent (can be NULL)
-
void kobject_del(struct kobject *kobj)¶
Unlink kobject from hierarchy.
Parameters
struct kobject *kobjobject.
Description
This is the function that should be called to delete an object
successfully added via kobject_add().
-
struct kobject *kobject_get(struct kobject *kobj)¶
Increment refcount for object.
Parameters
struct kobject *kobjobject.
-
void kobject_put(struct kobject *kobj)¶
Decrement refcount for object.
Parameters
struct kobject *kobjobject.
Description
Decrement the refcount, and if 0, call kobject_cleanup().
-
struct kobject *kobject_create_and_add(const char *name, struct kobject *parent)¶
Create a struct kobject dynamically and register it with sysfs.
Parameters
const char *namethe name for the kobject
struct kobject *parentthe parent kobject of this kobject, if any.
Description
This function creates a kobject structure dynamically and registers it
with sysfs. When you are finished with this structure, call
kobject_put() and the structure will be dynamically freed when
it is no longer being used.
If the kobject was not able to be created, NULL will be returned.
-
int kset_register(struct kset *k)¶
Initialize and add a kset.
Parameters
struct kset *kkset.
NOTE
On error, the kset.kobj.name allocated by kobject_set_name() is freed; it cannot be used any more.
-
void kset_unregister(struct kset *k)¶
Remove a kset.
Parameters
struct kset *kkset.
-
struct kobject *kset_find_obj(struct kset *kset, const char *name)¶
Search for object in kset.
Parameters
struct kset *ksetkset we’re looking in.
const char *nameobject’s name.
Description
Lock kset via kset->subsys, and iterate over kset->list, looking for a matching kobject. If matching object is found take a reference and return the object.
-
struct kset *kset_create_and_add(const char *name, const struct kset_uevent_ops *uevent_ops, struct kobject *parent_kobj)¶
Create a struct kset dynamically and add it to sysfs.
Parameters
const char *namethe name for the kset
const struct kset_uevent_ops *uevent_opsa struct kset_uevent_ops for the kset
struct kobject *parent_kobjthe parent kobject of this kset, if any.
Description
This function creates a kset structure dynamically and registers it
with sysfs. When you are finished with this structure, call
kset_unregister() and the structure will be dynamically freed when it
is no longer being used.
If the kset was not able to be created, NULL will be returned.
Kernel utility functions¶
-
might_sleep¶
might_sleep ()
annotation for functions that can sleep
Description
this macro will print a stack trace if it is executed in an atomic context (spinlock, irq-handler, ...). Additional sections where blocking is not allowed can be annotated with non_block_start() and non_block_end() pairs.
This is a useful debugging help to be able to catch problems early and not be bitten later when the calling function happens to sleep when it is not supposed to.
-
cant_sleep¶
cant_sleep ()
annotation for functions that cannot sleep
Description
this macro will print a stack trace if it is executed with preemption enabled
-
cant_migrate¶
cant_migrate ()
annotation for functions that cannot migrate
Description
Will print a stack trace if executed in code which is migratable
-
non_block_start¶
non_block_start ()
annotate the start of section where sleeping is prohibited
Description
This is on behalf of the oom reaper, specifically when it is calling the mmu notifiers. The problem is that if the notifier were to block on, for example, mutex_lock() and if the process which holds that mutex were to perform a sleeping memory allocation, the oom reaper is now blocked on completion of that memory allocation. Other blocking calls like wait_event() pose similar issues.
-
non_block_end¶
non_block_end ()
annotate the end of section where sleeping is prohibited
Description
Closes a section opened by
non_block_start().
-
trace_printk¶
trace_printk (fmt, ...)
printf formatting in the ftrace buffer
Parameters
fmtthe printf format for printing
...variable arguments
Note
- __trace_printk is an internal function for trace_printk() and the ip is passed in via the trace_printk() macro.
Description
This function allows a kernel developer to debug fast path sections that printk is not appropriate for. By scattering in various printk like tracing in the code, a developer can quickly see where problems are occurring.
This is intended as a debugging tool for the developer only.
Please refrain from leaving trace_printks scattered around in
your code. (Extra memory is used for special buffers that are
allocated when trace_printk() is used.)
A little optimization trick is done here. If there’s only one
argument, there’s no need to scan the string for printf formats.
The trace_puts() will suffice. But how can we take advantage of
using trace_puts() when trace_printk() has only one argument?
By stringifying the args and checking the size we can tell
whether or not there are args. __stringify((__VA_ARGS__)) will
turn into “()0” with a size of 3 when there are no args, anything
else will be bigger. All we need to do is define a string to this,
and then take its size and compare to 3. If it’s bigger, use
do_trace_printk() otherwise, optimize it to trace_puts(). Then just
let gcc optimize the rest.
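For illustration only (the function and variable names are made up): a trace_printk() dropped into a hot path while debugging. The output goes to the ftrace ring buffer rather than the kernel log, and such calls should be removed before the code is merged.
#include <linux/kernel.h>

static void mydrv_handle_irq(int irq, unsigned long status)
{
	trace_printk("irq %d status %#lx\n", irq, status);
	/* ... normal fast-path handling ... */
}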
-
trace_puts¶
trace_puts (str)
write a string into the ftrace buffer
Parameters
strthe string to record
Note
- __trace_bputs is an internal function for trace_puts and
the ip is passed in via the trace_puts macro.
Description
This is similar to trace_printk() but is made for those really fast
paths that a developer wants the least amount of “Heisenbug” effects,
where the processing of the print format is still too much.
This function allows a kernel developer to debug fast path sections that printk is not appropriate for. By scattering in various printk like tracing in the code, a developer can quickly see where problems are occurring.
This is intended as a debugging tool for the developer only.
Please refrain from leaving trace_puts scattered around in
your code. (Extra memory is used for special buffers that are
allocated when trace_puts() is used.)
Return
- 0 if nothing was written, positive # if string was.
(1 when __trace_bputs is used, strlen(str) when __trace_puts is used)
-
void console_list_lock(void)¶
Lock the console list
Parameters
voidno arguments
Description
For console list or console->flags updates
-
void console_list_unlock(void)¶
Unlock the console list
-
int console_srcu_read_lock(void)¶
Register a new reader for the SRCU-protected console list
Parameters
voidno arguments
Description
Use for_each_console_srcu() to iterate the console list
Context
Any context.
Return
A cookie to pass to console_srcu_read_unlock().
-
void console_srcu_read_unlock(int cookie)¶
Unregister an old reader from the SRCU-protected console list
Parameters
int cookiecookie returned from
console_srcu_read_lock()
Description
Counterpart to console_srcu_read_lock()
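A sketch of the reader pattern, under the assumption that the console list is walked with for_each_console_srcu() from linux/console.h; the cookie returned by console_srcu_read_lock() must be handed back to the matching unlock:
#include <linux/console.h>
#include <linux/printk.h>

static void mydrv_dump_console_names(void)
{
	struct console *con;
	int cookie;

	cookie = console_srcu_read_lock();
	for_each_console_srcu(con)
		pr_info("registered console: %s%d\n", con->name, con->index);
	console_srcu_read_unlock(cookie);
}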
-
int match_devname_and_update_preferred_console(const char *devname, const char *name, const short idx)¶
Update a preferred console when matching devname is found.
Parameters
const char *devnameDEVNAME:0.0 style device name
const char *nameName of the corresponding console driver, e.g. “ttyS”
const short idxConsole index, e.g. port number.
Description
The function checks whether a device with the given devname is preferred via the console=DEVNAME:0.0 command line option. It fills the missing console driver name and console index so that a later register_console() call could find (match) and enable this device.
It might be used when a driver subsystem initializes particular devices with already known DEVNAME:0.0 style names. And it could predict which console driver name and index this device would later get associated with.
Return
0 on success, negative error code on failure.
-
void console_lock(void)¶
block the console subsystem from printing
Parameters
voidno arguments
Description
Acquires a lock which guarantees that no consoles will be in or enter their write() callback.
Can sleep, returns nothing.
-
int console_trylock(void)¶
try to block the console subsystem from printing
Parameters
voidno arguments
Description
Try to acquire a lock which guarantees that no consoles will be in or enter their write() callback.
returns 1 on success, and 0 on failure to acquire the lock.
-
void console_unlock(void)¶
unblock the legacy console subsystem from printing
Parameters
voidno arguments
Description
Releases the console_lock which the caller holds to block printing of the legacy console subsystem.
While the console_lock was held, console output may have been buffered
by printk(). If this is the case, console_unlock() emits the output on
legacy consoles prior to releasing the lock.
console_unlock() may be called from any context.
-
void console_conditional_schedule(void)¶
yield the CPU if required
Parameters
voidno arguments
Description
If the console code is currently allowed to sleep, and if this CPU should yield the CPU to another task, do so here.
Must be called while holding the console lock (console_lock()).
-
void console_force_preferred_locked(struct console *con)¶
Force a registered console to be preferred.
Parameters
struct console *conThe registered console to force preferred.
Description
Must be called under console_list_lock().
-
bool printk_timed_ratelimit(unsigned long *caller_jiffies, unsigned int interval_msecs)¶
caller-controlled printk ratelimiting
Parameters
unsigned long *caller_jiffiespointer to caller’s state
unsigned int interval_msecsminimum interval between prints
Description
printk_timed_ratelimit() returns true if more than interval_msecs
milliseconds have elapsed since the last time printk_timed_ratelimit()
returned true.
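A minimal sketch of caller-controlled ratelimiting (the names are illustrative): the static variable holds the per-call-site state that printk_timed_ratelimit() updates.
#include <linux/printk.h>

static void mydrv_warn_overrun(void)
{
	static unsigned long last;

	if (printk_timed_ratelimit(&last, 5000))	/* at most every 5 s */
		pr_warn("mydrv: FIFO overrun\n");
}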
-
int kmsg_dump_register(struct kmsg_dumper *dumper)¶
register a kernel log dumper.
Parameters
struct kmsg_dumper *dumperpointer to the kmsg_dumper structure
Description
Adds a kernel log dumper to the system. The dump callback in the
structure will be called when the kernel oopses or panics and must be
set. Returns zero on success and -EINVAL or -EBUSY otherwise.
-
int kmsg_dump_unregister(struct kmsg_dumper *dumper)¶
unregister a kmsg dumper.
Parameters
struct kmsg_dumper *dumperpointer to the kmsg_dumper structure
Description
Removes a dump device from the system. Returns zero on success and
-EINVAL otherwise.
-
bool kmsg_dump_get_line(struct kmsg_dump_iter *iter, bool syslog, char *line, size_t size, size_t *len)¶
retrieve one kmsg log line
Parameters
struct kmsg_dump_iter *iterkmsg dump iterator
bool sysloginclude the “<4>” prefixes
char *linebuffer to copy the line to
size_t sizemaximum size of the buffer
size_t *lenlength of line placed into buffer
Description
Start at the beginning of the kmsg buffer, with the oldest kmsg record, and copy one record into the provided buffer.
Consecutive calls will return the next available record moving towards the end of the buffer with the youngest messages.
A return value of FALSE indicates that there are no more records to read.
-
bool kmsg_dump_get_buffer(struct kmsg_dump_iter *iter, bool syslog, char *buf, size_t size, size_t *len_out)¶
copy kmsg log lines
Parameters
struct kmsg_dump_iter *iterkmsg dump iterator
bool sysloginclude the “<4>” prefixes
char *bufbuffer to copy the line to
size_t sizemaximum size of the buffer
size_t *len_outlength of line placed into buffer
Description
Start at the end of the kmsg buffer and fill the provided buffer with as many of the youngest kmsg records that fit into it. If the buffer is large enough, all available kmsg records will be copied with a single call.
Consecutive calls will fill the buffer with the next block of available older records, not including the earlier retrieved ones.
A return value of FALSE indicates that there are no more records to read.
-
void kmsg_dump_rewind(struct kmsg_dump_iter *iter)¶
reset the iterator
Parameters
struct kmsg_dump_iter *iterkmsg dump iterator
Description
Reset the dumper’s iterator so that kmsg_dump_get_line() and
kmsg_dump_get_buffer() can be called again and used multiple
times within the same dumper.dump() callback.
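A sketch of a dump() callback that uses the iterator API above (mydrv_emit() is a hypothetical output sink): rewind first, then pull records line by line.
#include <linux/kmsg_dump.h>

static void mydrv_emit(const char *line, size_t len);	/* hypothetical sink */

static void mydrv_kmsg_dump(struct kmsg_dumper *dumper,
			    enum kmsg_dump_reason reason)
{
	struct kmsg_dump_iter iter;
	char line[256];
	size_t len;

	kmsg_dump_rewind(&iter);
	while (kmsg_dump_get_line(&iter, true, line, sizeof(line), &len))
		mydrv_emit(line, len);
}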
-
void __printk_cpu_sync_wait(void)¶
Busy wait until the printk cpu-reentrant spinning lock is not owned by any CPU.
Parameters
voidno arguments
Context
Any context.
-
int __printk_cpu_sync_try_get(void)¶
Try to acquire the printk cpu-reentrant spinning lock.
Parameters
voidno arguments
Description
If no processor has the lock, the calling processor takes the lock and becomes the owner. If the calling processor is already the owner of the lock, this function succeeds immediately.
Context
Any context. Expects interrupts to be disabled.
Return
1 on success, otherwise 0.
-
void __printk_cpu_sync_put(void)¶
Release the printk cpu-reentrant spinning lock.
Parameters
voidno arguments
Description
The calling processor must be the owner of the lock.
Context
Any context. Expects interrupts to be disabled.
-
void panic(const char *fmt, ...)¶
halt the system
Parameters
const char *fmtThe text string to print
...variable arguments
Description
Display a message, then perform cleanups. This function never returns.
-
void add_taint(unsigned flag, enum lockdep_ok lockdep_ok)¶
add a taint flag if not already set.
Parameters
unsigned flagone of the TAINT_* constants.
enum lockdep_ok lockdep_okwhether lock debugging is still OK.
Description
If something bad has gone wrong, you’ll want lockdep_ok = false, but for some noteworthy-but-not-corrupting cases, it can be set to true.
Device Resource Management¶
-
void *__devres_alloc_node(dr_release_t release, size_t size, gfp_t gfp, int nid, const char *name)¶
Allocate device resource data
Parameters
dr_release_t releaseRelease function devres will be associated with
size_t sizeAllocation size
gfp_t gfpAllocation flags
int nidNUMA node
const char *nameName of the resource
Description
Allocate devres of size bytes. The allocated area is zeroed, then associated with release. The returned pointer can be passed to other devres_*() functions.
Return
Pointer to allocated devres on success, NULL on failure.
-
void devres_for_each_res(struct device *dev, dr_release_t release, dr_match_t match, void *match_data, void (*fn)(struct device*, void*, void*), void *data)¶
Resource iterator
Parameters
struct device *devDevice to iterate resource from
dr_release_t releaseLook for resources associated with this release function
dr_match_t matchMatch function (optional)
void *match_dataData for the match function
void (*fn)(struct device *, void *, void *)Function to be called for each matched resource.
void *dataData for fn, the 3rd parameter of fn
Description
Call fn for each devres of dev which is associated with release and for which match returns 1.
Return
void
-
void devres_free(void *res)¶
Free device resource data
Parameters
void *resPointer to devres data to free
Description
Free devres created with devres_alloc().
-
void devres_add(struct device *dev, void *res)¶
Register device resource
Parameters
struct device *devDevice to add resource to
void *resResource to register
Description
Register devres res to dev. res should have been allocated using devres_alloc(). On driver detach, the associated release function will be invoked and devres will be freed automatically.
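Putting devres_alloc() (the wrapper around __devres_alloc_node() above) and devres_add() together, a hedged sketch of a custom managed resource; the mydrv_clock object and its setup/teardown are assumptions:
#include <linux/device.h>

struct mydrv_clock {
	int id;
};

static void mydrv_clock_release(struct device *dev, void *res)
{
	struct mydrv_clock *clk = res;

	/* undo whatever setup was done for clk->id here */
	dev_dbg(dev, "releasing clock %d\n", clk->id);
}

static int mydrv_clock_get_managed(struct device *dev, int id)
{
	struct mydrv_clock *clk;

	clk = devres_alloc(mydrv_clock_release, sizeof(*clk), GFP_KERNEL);
	if (!clk)
		return -ENOMEM;

	clk->id = id;
	/* ... acquire/enable the underlying resource ... */
	devres_add(dev, clk);
	return 0;
}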
-
void *devres_find(struct device *dev, dr_release_t release, dr_match_t match, void *match_data)¶
Find device resource
Parameters
struct device *devDevice to lookup resource from
dr_release_t releaseLook for resources associated with this release function
dr_match_t matchMatch function (optional)
void *match_dataData for the match function
Description
Find the latest devres of dev which is associated with release and for which match returns 1. If match is NULL, it’s considered to match all.
Return
Pointer to found devres, NULL if not found.
-
void *devres_get(struct device *dev, void *new_res, dr_match_t match, void *match_data)¶
Find devres, if non-existent, add one atomically
Parameters
struct device *devDevice to lookup or add devres for
void *new_resPointer to new initialized devres to add if not found
dr_match_t matchMatch function (optional)
void *match_dataData for the match function
Description
Find the latest devres of dev which has the same release function as new_res and for which match returns 1. If found, new_res is freed; otherwise, new_res is added atomically.
Return
Pointer to found or added devres.
-
void *devres_remove(struct device *dev, dr_release_t release, dr_match_t match, void *match_data)¶
Find a device resource and remove it
Parameters
struct device *devDevice to find resource from
dr_release_t releaseLook for resources associated with this release function
dr_match_t matchMatch function (optional)
void *match_dataData for the match function
Description
Find the latest devres of dev associated with release and for which match returns 1. If match is NULL, it’s considered to match all. If found, the resource is removed atomically and returned.
Return
Pointer to removed devres on success, NULL if not found.
-
int devres_destroy(struct device *dev, dr_release_t release, dr_match_t match, void *match_data)¶
Find a device resource and destroy it
Parameters
struct device *devDevice to find resource from
dr_release_t releaseLook for resources associated with this release function
dr_match_t matchMatch function (optional)
void *match_dataData for the match function
Description
Find the latest devres of dev associated with release and for which match returns 1. If match is NULL, it’s considered to match all. If found, the resource is removed atomically and freed.
Note that the release function for the resource will not be called, only the devres-allocated data will be freed. The caller becomes responsible for freeing any other data.
Return
0 if devres is found and freed, -ENOENT if not found.
-
int devres_release(struct device *dev, dr_release_t release, dr_match_t match, void *match_data)¶
Find a device resource and destroy it, calling release
Parameters
struct device *devDevice to find resource from
dr_release_t releaseLook for resources associated with this release function
dr_match_t matchMatch function (optional)
void *match_dataData for the match function
Description
Find the latest devres of dev associated with release and for which match returns 1. If match is NULL, it’s considered to match all. If found, the resource is removed atomically, the release function called and the resource freed.
Return
0 if devres is found and freed, -ENOENT if not found.
-
void *devres_open_group(struct device *dev, void *id, gfp_t gfp)¶
Open a new devres group
Parameters
struct device *devDevice to open devres group for
void *idSeparator ID
gfp_t gfpAllocation flags
Description
Open a new devres group for dev with id. For id, using a pointer to an object which won’t be used for another group is recommended. If id is NULL, address-wise unique ID is created.
Return
ID of the new group, NULL on failure.
-
void devres_close_group(struct device *dev, void *id)¶
Close a devres group
Parameters
struct device *devDevice to close devres group for
void *idID of target group, can be NULL
Description
Close the group identified by id. If id is NULL, the latest open group is selected.
-
void devres_remove_group(struct device *dev, void *id)¶
Remove a devres group
Parameters
struct device *devDevice to remove group for
void *idID of target group, can be NULL
Description
Remove the group identified by id. If id is NULL, the latest open group is selected. Note that removing a group doesn’t affect any other resources.
-
int devres_release_group(struct device *dev, void *id)¶
Release resources in a devres group
Parameters
struct device *devDevice to release group for
void *idID of target group, can be NULL
Description
Release all resources in the group identified by id. If id is NULL, the latest open group is selected. The selected group and groups properly nested inside the selected group are removed.
Return
The number of released non-group resources.
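The group calls are typically paired as below; a sketch under the assumption of a hypothetical mydrv_alloc_subblock_resources() helper, showing how a failed sub-block can be rolled back without detaching the whole device:
#include <linux/device.h>

static int mydrv_alloc_subblock_resources(struct device *dev);	/* hypothetical */

static int mydrv_setup_subblock(struct device *dev)
{
	void *group;
	int ret;

	group = devres_open_group(dev, NULL, GFP_KERNEL);
	if (!group)
		return -ENOMEM;

	ret = mydrv_alloc_subblock_resources(dev);
	if (ret) {
		/* releases everything added since devres_open_group() */
		devres_release_group(dev, group);
		return ret;
	}

	devres_close_group(dev, group);
	return 0;
}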
-
int __devm_add_action(struct device *dev, void (*action)(void*), void *data, const char *name)¶
add a custom action to list of managed resources
Parameters
struct device *devDevice that owns the action
void (*action)(void *)Function that should be called
void *dataPointer to data passed to action implementation
const char *nameName of the resource (for debugging purposes)
Description
This adds a custom action to the list of managed resources so that it gets executed as part of standard resource unwinding.
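In driver code this machinery is normally reached through the devm_add_action()/devm_add_action_or_reset() wrappers. A sketch with made-up mydrv_hw helpers, where the undo action is run automatically on probe failure or detach:
#include <linux/device.h>

struct mydrv_hw;					/* hypothetical driver state */
int mydrv_hw_power_on(struct mydrv_hw *hw);		/* hypothetical */
void mydrv_hw_power_off(struct mydrv_hw *hw);		/* hypothetical */

static void mydrv_disable_hw(void *data)
{
	mydrv_hw_power_off(data);
}

static int mydrv_enable_hw(struct device *dev, struct mydrv_hw *hw)
{
	int ret;

	ret = mydrv_hw_power_on(hw);
	if (ret)
		return ret;

	/* powered back off automatically when the devres list unwinds */
	return devm_add_action_or_reset(dev, mydrv_disable_hw, hw);
}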
-
int devm_remove_action_nowarn(struct device *dev, void (*action)(void*), void *data)¶
removes previously added custom action
Parameters
struct device *devDevice that owns the action
void (*action)(void *)Function implementing the action
void *dataPointer to data passed to action implementation
Description
Removes instance of action previously added by devm_add_action(). Both action and data should match one of the existing entries.
In contrast to devm_remove_action(), this function does not WARN() if no matching entry was found.
This should only be used if the action is contained in an object with independent lifetime management, e.g. the Devres rust abstraction.
Causing the warning from regular driver code most likely indicates an abuse of the devres API.
Return
0 on success, -ENOENT if no matching entry was found.
-
void devm_release_action(struct device *dev, void (*action)(void*), void *data)¶
release previously added custom action
Parameters
struct device *devDevice that owns the action
void (*action)(void *)Function implementing the action
void *dataPointer to data passed to action implementation
Description
Releases and removes instance of action previously added by devm_add_action(). Both action and data should match one of the existing entries.
-
void *devm_kmalloc(struct device *dev, size_t size, gfp_t gfp)¶
Resource-managed kmalloc
Parameters
struct device *devDevice to allocate memory for
size_t sizeAllocation size
gfp_t gfpAllocation gfp flags
Description
Managed kmalloc. Memory allocated with this function is automatically freed on driver detach. Like all other devres resources, guaranteed alignment is unsigned long long.
Return
Pointer to allocated memory on success, NULL on failure.
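Typical probe-time use, as a hedged sketch (struct mydrv_priv and the platform driver wiring are assumptions): state allocated with devm_kzalloc() needs no explicit kfree() in the remove path.
#include <linux/device.h>
#include <linux/platform_device.h>
#include <linux/slab.h>

struct mydrv_priv {
	int irq;
};

static int mydrv_probe(struct platform_device *pdev)
{
	struct mydrv_priv *priv;

	priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
	if (!priv)
		return -ENOMEM;

	platform_set_drvdata(pdev, priv);
	return 0;
}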
-
void *devm_krealloc(struct device *dev, void *ptr, size_t new_size, gfp_t gfp)¶
Resource-managed krealloc()
Parameters
struct device *devDevice to re-allocate memory for
void *ptrPointer to the memory chunk to re-allocate
size_t new_sizeNew allocation size
gfp_t gfpAllocation gfp flags
Description
Managed krealloc(). Resizes the memory chunk allocated with devm_kmalloc().
Behaves similarly to regular krealloc(): if ptr is NULL or ZERO_SIZE_PTR,
it’s the equivalent of devm_kmalloc(). If new_size is zero, it frees the
previously allocated memory and returns ZERO_SIZE_PTR. This function doesn’t
change the order in which the release callback for the re-alloc’ed devres
will be called (except when falling back to devm_kmalloc() or when freeing
resources when new_size is zero). The contents of the memory are preserved
up to the lesser of new and old sizes.
-
char *devm_kstrdup(struct device *dev, const char *s, gfp_t gfp)¶
Allocate resource managed space and copy an existing string into that.
Parameters
struct device *devDevice to allocate memory for
const char *sthe string to duplicate
gfp_t gfpthe GFP mask used in the devm_kmalloc() call when allocating memory
Return
Pointer to allocated string on success, NULL on failure.
-
const char *devm_kstrdup_const(struct device *dev, const char *s, gfp_t gfp)¶
resource managed conditional string duplication
Parameters
struct device *devdevice for which to duplicate the string
const char *sthe string to duplicate
gfp_t gfpthe GFP mask used in the kmalloc() call when allocating memory
Description
Strings allocated by devm_kstrdup_const will be automatically freed when the associated device is detached.
Return
Source string if it is in the .rodata section, otherwise it falls back to devm_kstrdup().
-
char *devm_kvasprintf(struct device *dev, gfp_t gfp, const char *fmt, va_list ap)¶
Allocate resource managed space and format a string into that.
Parameters
struct device *devDevice to allocate memory for
gfp_t gfpthe GFP mask used in the devm_kmalloc() call when allocating memory
const char *fmtThe printf()-style format string
va_list apArguments for the format string
Return
Pointer to allocated string on success, NULL on failure.
-
char *devm_kasprintf(struct device *dev, gfp_t gfp, const char *fmt, ...)¶
Allocate resource managed space and format a string into that.
Parameters
struct device *devDevice to allocate memory for
gfp_t gfpthe GFP mask used in the devm_kmalloc() call when allocating memory
const char *fmtThe printf()-style format string
...Arguments for the format string
Return
Pointer to allocated string on success, NULL on failure.
-
void devm_kfree(struct device *dev, const void *p)¶
Resource-managed kfree
Parameters
struct device *devDevice this memory belongs to
const void *pMemory to free
Description
Free memory allocated with devm_kmalloc().
-
void *devm_kmemdup(struct device *dev, const void *src, size_t len, gfp_t gfp)¶
Resource-managed kmemdup
Parameters
struct device *devDevice this memory belongs to
const void *srcMemory region to duplicate
size_t lenMemory region length
gfp_t gfpGFP mask to use
Description
Duplicate a region of memory using resource-managed kmalloc().
-
unsigned long devm_get_free_pages(struct device *dev, gfp_t gfp_mask, unsigned int order)¶
Resource-managed __get_free_pages
Parameters
struct device *devDevice to allocate memory for
gfp_t gfp_maskAllocation gfp flags
unsigned int orderAllocation size is (1 << order) pages
Description
Managed get_free_pages. Memory allocated with this function is automatically freed on driver detach.
Return
Address of allocated memory on success, 0 on failure.
-
void devm_free_pages(struct device *dev, unsigned long addr)¶
Resource-managed free_pages
Parameters
struct device *devDevice this memory belongs to
unsigned long addrMemory to free
Description
Free memory allocated with devm_get_free_pages(). Unlike free_pages,
there is no need to supply the order.
-
void __percpu *__devm_alloc_percpu(struct device *dev, size_t size, size_t align)¶
Resource-managed alloc_percpu
Parameters
struct device *devDevice to allocate per-cpu memory for
size_t sizeSize of per-cpu memory to allocate
size_t alignAlignment of per-cpu memory to allocate
Description
Managed alloc_percpu. Per-cpu memory allocated with this function is automatically freed on driver detach.
Return
Pointer to allocated memory on success, NULL on failure.
-
void devm_free_percpu(struct device *dev, void __percpu *pdata)¶
Resource-managed free_percpu
Parameters
struct device *devDevice this memory belongs to
void __percpu *pdataPer-cpu memory to free
Description
Free memory allocated with devm_alloc_percpu().