Devres - Managed Device Resource
================================

Tejun Heo <teheo@suse.de>

First draft	10 January 2007


1. Intro                        : Huh? Devres?
2. Devres                       : Devres in a nutshell
3. Devres Group                 : Group devres'es and release them together
4. Details                      : Life time rules, calling context, ...
5. Overhead                     : How much do we have to pay for this?
6. List of managed interfaces   : Currently implemented managed interfaces


  1. Intro
  --------

devres came up while trying to convert libata to use iomap.  Each
iomapped address should be kept and unmapped on driver detach.  For
example, a plain SFF ATA controller (that is, good old PCI IDE) in
native mode makes use of 5 PCI BARs and all of them should be
maintained.

As with many other device drivers, libata low level drivers have
plenty of bugs in their ->remove and ->probe failure paths.  Well,
yes, that's probably because libata low level driver developers are a
lazy bunch, but aren't all low level driver developers?  After
spending a day fiddling with braindamaged hardware with no
documentation or braindamaged documentation, if it's finally working,
well, it's working.

For one reason or another, low level drivers don't receive as much
attention or testing as core code, and bugs on driver detach or
initialization failure don't happen often enough to be noticeable.
The init failure path is worse because it's much less travelled while
it needs to handle multiple entry points.

So, many low level drivers end up leaking resources on driver detach
and having half-broken failure paths in ->probe() which leak
resources or even oops when a failure occurs.  iomap adds more to
this mix.  So do MSI and MSI-X.


  2. Devres
  ---------

devres is basically a linked list of arbitrarily sized memory areas
associated with a struct device.  Each devres entry is associated
with a release function.  A devres can be released in several ways.
No matter what, all devres entries are released on driver detach.  On
release, the associated release function is invoked and then the
devres entry is freed.

Managed interfaces are created for resources commonly used by device
drivers using devres.  For example, coherent DMA memory is acquired
using dma_alloc_coherent().  The managed version is called
dmam_alloc_coherent().  It is identical to dma_alloc_coherent()
except that the DMA memory allocated with it is managed and will be
automatically released on driver detach.  The implementation looks
like the following.

  struct dma_devres {
          size_t          size;
          void            *vaddr;
          dma_addr_t      dma_handle;
  };

  static void dmam_coherent_release(struct device *dev, void *res)
  {
          struct dma_devres *this = res;

          dma_free_coherent(dev, this->size, this->vaddr, this->dma_handle);
  }

  void *dmam_alloc_coherent(struct device *dev, size_t size,
                            dma_addr_t *dma_handle, gfp_t gfp)
  {
          struct dma_devres *dr;
          void *vaddr;

          dr = devres_alloc(dmam_coherent_release, sizeof(*dr), gfp);
          if (!dr)
                  return NULL;

          /* alloc DMA memory as usual */
          vaddr = dma_alloc_coherent(dev, size, dma_handle, gfp);
          if (!vaddr) {
                  devres_free(dr);
                  return NULL;
          }

          /* record size, vaddr, dma_handle in dr */
          dr->vaddr = vaddr;
          dr->dma_handle = *dma_handle;
          dr->size = size;

          devres_add(dev, dr);

          return vaddr;
  }

If a driver uses dmam_alloc_coherent(), the area is guaranteed to be
freed whether initialization fails half-way or the device gets
detached.
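
dmam_alloc_coherent() wraps an existing allocator, but a driver can
use the same devres_alloc()/devres_add() pair directly for its own
one-off cleanup.  The following is a minimal sketch of that; the
my_chip type and the my_chip_power_on()/my_chip_power_off() helpers
are hypothetical, only the devres calls are real.

  /* driver-local managed power-on; all "my_*" names are made up */
  struct my_power_devres {
          struct my_chip *chip;
  };

  static void my_power_release(struct device *dev, void *res)
  {
          struct my_power_devres *this = res;

          my_chip_power_off(this->chip);
  }

  static int my_power_on_managed(struct device *dev, struct my_chip *chip)
  {
          struct my_power_devres *dr;
          int ret;

          dr = devres_alloc(my_power_release, sizeof(*dr), GFP_KERNEL);
          if (!dr)
                  return -ENOMEM;

          ret = my_chip_power_on(chip);
          if (ret) {
                  devres_free(dr);
                  return ret;
          }

          dr->chip = chip;
          devres_add(dev, dr);    /* powered off automatically on detach */
          return 0;
  }
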
If most resources are acquired using managed interfaces, a driver can
have much simpler init and exit code.  The init path basically looks
like the following.

  my_init_one()
  {
          struct mydev *d;

          d = devm_kzalloc(dev, sizeof(*d), GFP_KERNEL);
          if (!d)
                  return -ENOMEM;

          d->ring = dmam_alloc_coherent(...);
          if (!d->ring)
                  return -ENOMEM;

          if (check something)
                  return -EINVAL;
          ...

          return register_to_upper_layer(d);
  }

And the exit path,

  my_remove_one()
  {
          unregister_from_upper_layer(d);
          shutdown_my_hardware();
  }

As shown above, low level drivers can be simplified a lot by using
devres.  Complexity is shifted from less maintained low level drivers
to the better maintained higher layer.  Also, as the init failure
path is shared with the exit path, both can get more testing.


  3. Devres group
  ---------------

Devres entries can be grouped using a devres group.  When a group is
released, all contained normal devres entries and properly nested
groups are released.  One use case is to roll back a series of
acquired resources on failure.  For example,

  if (!devres_open_group(dev, NULL, GFP_KERNEL))
          return -ENOMEM;

  acquire A;
  if (failed)
          goto err;

  acquire B;
  if (failed)
          goto err;
  ...

  devres_remove_group(dev, NULL);
  return 0;

 err:
  devres_release_group(dev, NULL);
  return err_code;

As resource acquisition failure usually means probe failure,
constructs like the above are usually useful in midlayer drivers
(e.g. the libata core layer) where an interface function shouldn't
have side effects on failure.  For LLDs, just returning an error code
suffices in most cases.

Each group is identified by a void *id.  It can either be explicitly
specified via the @id argument to devres_open_group() or
automatically created by passing NULL as @id as in the above example.
In both cases, devres_open_group() returns the group's id.  The
returned id can be passed to other devres functions to select the
target group.  If NULL is given to those functions, the most recently
opened group is selected.

For example, you can do something like the following.

  int my_midlayer_create_something(struct device *dev)
  {
          if (!devres_open_group(dev, my_midlayer_create_something,
                                 GFP_KERNEL))
                  return -ENOMEM;

          ...

          devres_close_group(dev, my_midlayer_create_something);
          return 0;
  }

  void my_midlayer_destroy_something(struct device *dev)
  {
          devres_release_group(dev, my_midlayer_create_something);
  }


  4. Details
  ----------

The lifetime of a devres entry begins on devres allocation and
finishes when it is released or destroyed (removed and freed) - no
reference counting.

The devres core guarantees atomicity of all basic devres operations
and has support for single-instance devres types (atomic
lookup-and-add-if-not-found).  Other than that, synchronizing
concurrent accesses to allocated devres data is the caller's
responsibility.  This is usually a non-issue because bus ops and
resource allocations already do the job.

For an example of a single-instance devres type, read
pcim_iomap_table() in lib/devres.c.

All devres interface functions can be called from atomic context if
the right gfp mask is given.
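
The lookup-and-add-if-not-found operation mentioned above is
devres_get().  A minimal sketch of a single-instance devres type,
loosely modelled on pcim_iomap_table() and using a hypothetical
struct my_state, could look like this.

  struct my_state {
          int foo;
  };

  static void my_state_release(struct device *dev, void *res)
  {
          /* nothing to undo; the devres core frees the memory itself */
  }

  /*
   * Return the per-device instance, creating it on first use.
   * devres_get() atomically either adds the new candidate or frees it
   * and returns the already registered entry, so concurrent callers
   * always see the same instance.
   */
  static struct my_state *my_state_get(struct device *dev)
  {
          struct my_state *new_dr, *dr;

          new_dr = devres_alloc(my_state_release, sizeof(*new_dr),
                                GFP_KERNEL);
          if (!new_dr)
                  return NULL;

          dr = devres_get(dev, new_dr, NULL, NULL);
          return dr;
  }
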

  5. Overhead
  -----------

Each devres bookkeeping entry is allocated together with the
requested data area.  With the debug option turned off, the
bookkeeping info occupies 16 bytes on 32bit machines and 24 bytes on
64bit ones (three pointers rounded up to ull alignment).  If a singly
linked list were used, this could be reduced to two pointers (8 bytes
on 32bit, 16 bytes on 64bit).

Each devres group occupies 8 pointers.  It could be reduced to 6 if a
singly linked list were used.

The memory space overhead on an ahci controller with two ports is
between 300 and 400 bytes on a 32bit machine after a naive conversion
(we can certainly invest a bit more effort into the libata core
layer).


  6. List of managed interfaces
  -----------------------------

A short probe() sketch that combines several of these interfaces
follows the list.

CLOCK
  devm_clk_get()
  devm_clk_put()
  devm_clk_hw_register()
  devm_of_clk_add_hw_provider()

DMA
  dmaenginem_async_device_register()
  dmam_alloc_coherent()
  dmam_alloc_attrs()
  dmam_declare_coherent_memory()
  dmam_free_coherent()
  dmam_pool_create()
  dmam_pool_destroy()

GPIO
  devm_gpiod_get()
  devm_gpiod_get_index()
  devm_gpiod_get_index_optional()
  devm_gpiod_get_optional()
  devm_gpiod_put()
  devm_gpiochip_add_data()
  devm_gpiochip_remove()
  devm_gpio_request()
  devm_gpio_request_one()
  devm_gpio_free()

IIO
  devm_iio_device_alloc()
  devm_iio_device_free()
  devm_iio_device_register()
  devm_iio_device_unregister()
  devm_iio_kfifo_allocate()
  devm_iio_kfifo_free()
  devm_iio_triggered_buffer_setup()
  devm_iio_triggered_buffer_cleanup()
  devm_iio_trigger_alloc()
  devm_iio_trigger_free()
  devm_iio_trigger_register()
  devm_iio_trigger_unregister()
  devm_iio_channel_get()
  devm_iio_channel_release()
  devm_iio_channel_get_all()
  devm_iio_channel_release_all()

INPUT
  devm_input_allocate_device()

IO region
  devm_release_mem_region()
  devm_release_region()
  devm_release_resource()
  devm_request_mem_region()
  devm_request_region()
  devm_request_resource()

IOMAP
  devm_ioport_map()
  devm_ioport_unmap()
  devm_ioremap()
  devm_ioremap_nocache()
  devm_ioremap_wc()
  devm_ioremap_resource() : checks resource, requests memory region, ioremaps
  devm_iounmap()
  pcim_iomap()
  pcim_iomap_regions() : do request_region() and iomap() on multiple BARs
  pcim_iomap_table() : array of mapped addresses indexed by BAR
  pcim_iounmap()

IRQ
  devm_free_irq()
  devm_request_any_context_irq()
  devm_request_irq()
  devm_request_threaded_irq()
  devm_irq_alloc_descs()
  devm_irq_alloc_desc()
  devm_irq_alloc_desc_at()
  devm_irq_alloc_desc_from()
  devm_irq_alloc_descs_from()
  devm_irq_alloc_generic_chip()
  devm_irq_setup_generic_chip()
  devm_irq_sim_init()

LED
  devm_led_classdev_register()
  devm_led_classdev_unregister()

MDIO
  devm_mdiobus_alloc()
  devm_mdiobus_alloc_size()
  devm_mdiobus_free()

MEM
  devm_free_pages()
  devm_get_free_pages()
  devm_kasprintf()
  devm_kcalloc()
  devm_kfree()
  devm_kmalloc()
  devm_kmalloc_array()
  devm_kmemdup()
  devm_kstrdup()
  devm_kvasprintf()
  devm_kzalloc()

MFD
  devm_mfd_add_devices()

MUX
  devm_mux_chip_alloc()
  devm_mux_chip_register()
  devm_mux_control_get()

PER-CPU MEM
  devm_alloc_percpu()
  devm_free_percpu()

PCI
  devm_pci_alloc_host_bridge() : managed PCI host bridge allocation
  devm_pci_remap_cfgspace() : ioremap PCI configuration space
  devm_pci_remap_cfg_resource() : ioremap PCI configuration space resource
  pcim_enable_device() : after success, all PCI ops become managed
  pcim_pin_device() : keep PCI device enabled after release

PHY
  devm_usb_get_phy()
  devm_usb_put_phy()

PINCTRL
  devm_pinctrl_get()
  devm_pinctrl_put()
  devm_pinctrl_register()
  devm_pinctrl_unregister()

POWER
  devm_reboot_mode_register()
  devm_reboot_mode_unregister()

PWM
  devm_pwm_get()
  devm_pwm_put()

REGULATOR
  devm_regulator_bulk_get()
  devm_regulator_get()
  devm_regulator_put()
  devm_regulator_register()

RESET
  devm_reset_control_get()
  devm_reset_controller_register()

SERDEV
  devm_serdev_device_open()

SLAVE DMA ENGINE
  devm_acpi_dma_controller_register()

SPI
  devm_spi_register_master()

WATCHDOG
  devm_watchdog_register_device()
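
As promised above, here is a short probe() sketch that combines a few
of the interfaces in this list.  It is a minimal illustration, not
taken from any real driver: struct my_chip, my_irq_handler() and
my_chip_init() are hypothetical, while devm_kzalloc(),
devm_ioremap_resource() and devm_request_irq() are the managed
interfaces listed above.

  struct my_chip {
          void __iomem *regs;
  };

  static irqreturn_t my_irq_handler(int irq, void *data)
  {
          /* hypothetical handler, details omitted */
          return IRQ_HANDLED;
  }

  static int my_probe(struct platform_device *pdev)
  {
          struct device *dev = &pdev->dev;
          struct resource *res;
          struct my_chip *chip;
          int irq, ret;

          chip = devm_kzalloc(dev, sizeof(*chip), GFP_KERNEL);
          if (!chip)
                  return -ENOMEM;

          res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
          chip->regs = devm_ioremap_resource(dev, res);
          if (IS_ERR(chip->regs))
                  return PTR_ERR(chip->regs);

          irq = platform_get_irq(pdev, 0);
          if (irq < 0)
                  return irq;

          ret = devm_request_irq(dev, irq, my_irq_handler, 0,
                                 dev_name(dev), chip);
          if (ret)
                  return ret;

          platform_set_drvdata(pdev, chip);
          return my_chip_init(chip);
  }

Everything acquired above is released automatically if probe() fails
at any point or when the driver is later detached, so the matching
remove() only has to undo non-devres work.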