A few days ago, a node of a 12.1.0.2 two-node cluster rebooted at one of our client’s sites. The cause was a SAN hardware misconfiguration affecting how the machines access the network over a 10GbE connection. The issue happened again the next night on the other cluster node, immediately after the scheduled Data Pump exports started to run. Hardware support identified the problem as saturation of a blade chassis port, which triggered the node reboot as a system protection measure. I don’t know much more about the technical details.
When the sysadmins reviewed the storage array accesses from the Oracle servers, they saw that the port saturation came from high IO activity on the LUNs belonging to an ASM diskgroup with an ACFS filesystem on it. We had already seen this kind of activity before: it is generated by a process called ACFS resilvering.
What is resilvering?
It’s a concept asociated to a disk mirror rebuild. Resilvering is a very appropriate word as it comes form the action of repairing the damaged parts of a glass mirror using silver. This silver (or aluminiun) layer is the one that gives a glass it’s mirror properties.
When there’s an issue with a disk mirror (e.g. when we lose one of the member disks), resilvering is the process which, once the new disk is available, re-establishes data synchronization in the new configuration. This process is usually called rebuild or resync, but resilver is also the term used by ZFS, a technology similar to ACFS in that it combines a filesystem and a volume manager. ZFS was designed by Sun Microsystems before the company was acquired by Oracle. Reading about ZFS features, we find:
The features of ZFS include protection against data corruption, support for high storage capacities, efficient data compression, integration of the concepts of filesystem and volume management, snapshots and copy-on-write clones, continuous integrity checking and automatic repair, RAID-Z and native NFSv4 ACLs.
Some functional features are shared with ACFS.
Why is a resilver operation running?
In our case, the resilvering process was running on an ACFS filesystem of almost 2 TiB. The ASM diskgroup used by the ADVM volume and the ACFS filesystem were created on LUNs with a restrictive QoS policy at the storage array level. The resilvering process needed more than 10 hours to complete due to this IO restriction.
There’s not much official information about resilvering in the official ASM / ACFS / ADVM documentation, and not much more on the Internet. Most of the specific information I can find in MOS is related to bugs, like the one covered in Note 2071755.1, where high CPU consumption by the asmResilver1 process is described. I can find more resilvering information related to the Oracle Engineered Systems Exadata and ODA, but even for those configurations I cannot find useful documentation explaining the general concepts. At this point, my recommendation is to start by reading this post and this post from Erman Arslan’s blog.
As Erman explains in his posts, there are two different processes, asmResilver1 and asmResilver2 (on ODA systems). asmResilver1 starts working after a crash, such as an ASM failure; it checks whether any region in the volume needs recovery and performs it if needed. asmResilver2 runs after a cluster reconfiguration, such as stopping one RAC member, and verifies whether any recovery operation is needed at the ADVM level.
ADVM resilvering is completely different from the ASM rebalancing that runs on any regular ASM diskgroup when the storage configuration changes. Rebalancing operations affect ADVM volumes too, independently of any resilvering the upper-level volume may need. On a Linux server, we can see resilvering activity in /var/log/messages, where the ADVM driver dumps its output.
ADVMK-0010: Mirror recovery for volume VOLACFS in diskgroup ACFS started
In the next entries, ADVM indicates when a reconfiguration is starting, forcing the resilver processes to check the situation. In our example, we found three entries: one for the beginning of the cluster reconfiguration, and two more about the completion of the reconfiguration, one coming from each of the cluster members.
ADVMK-0013: Cluster reconfiguration started.
ADVMK-0014: Cluster reconfiguration completed.
ADVMK-0014: Cluster reconfiguration completed.
Resilvering related processes
In our environment, after one node crashed and rebooted, we also found two different resilver processes, though in our case they were named asmResilver and asmResilver2, and they run as root. In the analyzed occurrence of the issue, asmResilver was in charge of validating the data in the volume, while asmResilver2 was responsible for DRL metadata validation. Both processes execute on the node that starts the resilvering operation, and I did not detect any similar process on the other node, even after it joined the cluster again. Looking into the ASM performance views, I did not find any activity in gv$asm_operation, even though this view is able to show resilvering information according to Oracle’s documentation, but only in one specific scenario, involving Exadata’s WriteBack FlashCache:
RESILVER– This value appears in Oracle Exadata environments when WriteBack FlashCache is enabled
I could not identify any specific trace file for these processes at the OS level. What I found was a considerably high number of ADVM processes running, much higher than on the node not performing the resilvering.
[root@nodo1 ~]# ps -ef | grep ADVM
root     11786     2  0 12:11 ?        00:00:00 [ADVM]
root     11787     2  0 12:11 ?        00:00:00 [ADVM]
root     25444     2  0 Apr26 ?        00:00:02 [ADVM]
root     29858     2  0 12:19 ?        00:00:00 [ADVM]

[oracle@nodo2 ~]$ ps -ef | grep ADVM
root     12347     2  3 12:45 ?        00:00:46 [ADVM]
root     23453     2  2 12:47 ?        00:00:35 [ADVM]
root     23454     2  3 12:47 ?        00:00:44 [ADVM]
root     23470     2  2 12:47 ?        00:00:39 [ADVM]
root     23472     2  3 12:47 ?        00:00:44 [ADVM]
root     23473     2  2 12:47 ?        00:00:29 [ADVM]
root     23485     2  2 12:47 ?        00:00:36 [ADVM]
root     23486     2  1 12:47 ?        00:00:27 [ADVM]
root     23487     2  1 12:47 ?        00:00:27 [ADVM]
root     23505     2  2 12:47 ?        00:00:33 [ADVM]
root     23506     2  2 12:47 ?        00:00:33 [ADVM]
root     23508     2  3 12:47 ?        00:00:44 [ADVM]
root     23509     2  2 12:47 ?        00:00:42 [ADVM]
root     23511     2  2 12:47 ?        00:00:38 [ADVM]
Finally, in the logs I found these processes referenced as UsmMonitor.
Where to look for resilvering information
Now, if we go to the MOS note, we can see how to find ADVM’s log, either from the command line or by looking at memory through /proc/oks/log. This file is the in-memory trace of the OKS (Oracle Kernel Services) driver, one of the three kernel drivers related to ACFS (OKS, ACFS and ADVM). Oracle packages the three drivers in what it calls the USM kernel drivers. We can see the corresponding .ko files in the usm folder inside the kernel’s modules path.
[root@nodo acfs]# ls -la /lib/modules/3.10.0-229.1.2.el7.x86_64/extra/usm/
total 65132
drwxr-xr-x 2 root root     4096 May  4  2016 .
drwxr-xr-x 3 root root     4096 May  4  2016 ..
-rw-r--r-- 1 root root 45106765 May  4  2016 oracleacfs.ko
-rw-r--r-- 1 root root 12644673 May  4  2016 oracleadvm.ko
-rw-r--r-- 1 root root  8926957 May  4  2016 oracleoks.ko
Starting with 11.2.0.3, this log is also available on disk, written to $GRID_BASE/crsdata/<hostname>/acfs in 12c or $GRID_HOME/log/<hostname>/acfs/kernel in 11gR2. Let’s look for further information in it:
V 4294808.419/170427124724 asmResilver[23471] Asm_resilverVolume: EXPORT.volexpdp-440: resilver start
V 4294808.419 asmResilver[23471] Asm_startResilveringOps: EXPORT.volexpdp-440: initial broadcast recovProg=zero
K 4294825.656/170427124741 UsmMonitor[12350] Ks_reapZombies/ADVM free cb 0xffff880b0351a200/asmResilver ncbs=1 nops=1 runTm=17.651170 completions=0 completions Tm=0.000000
V 4298071.340 asmResilver[23471] Asm_waitResilveringDone: EXPORT.volexpdp-440: setting valb 0x18446744073709551614
V 4298071.340/170427134147 asmResilver[23471] Asm_freeResilverInfo: EXPORT.volexpdp-440: 2596096 regions 166150144 I/Os, 3263 secs
Here we can access additional information beyond what is output to the messages file: the execution time and the number of IOs performed are available, giving us more insight into what is happening while a resilver operation is going on.
If we are running at least 12.1.0.2 GI, we have binary logging for the USM drivers. This means that in the same path as the OKS log we can find binary log files (not editable as text) that can be useful if we need to open an SR. There is also an event file, where resilvering events are registered among other information. This is a good place to find historical information about our resilvering processes.
[root@nodo2 acfs]# grep -i resilver event.log
2016-05-18 10:17:53.446: asmResilver2[9027] Volume EXPORTORA.volexpdp-226: Mirror recovery start
2016-05-18 10:17:53.743: asmResilver2[9027] Volume EXPORTORA.volexpdp-226: Mirror recovery done
2016-06-01 14:41:55.352: asmResilver2[11555] Volume EXPORTORA.volexpdp-226: Mirror recovery start
2016-06-01 14:41:55.412: asmResilver2[11555] Volume EXPORTORA.volexpdp-226: Mirror recovery done
2016-06-02 15:27:39.535: asmResilver[6687] Volume ACFS.volacfscws-300: Mirror recovery start
2016-06-02 15:27:39.543: asmResilver[6687] Volume ACFS.volacfscws-300: Mirror recovery done
2016-06-02 15:27:39.778: asmResilver[8226] Volume EXPORTORA.volexpdp-226: Mirror recovery start
2016-06-02 15:27:39.844: asmResilver[8226] Volume EXPORTORA.volexpdp-226: Mirror recovery done
2016-06-03 19:25:47.052: asmResilver2[1940] Volume ACFS.volacfscws-300: Mirror recovery start
2016-06-03 19:25:47.055: asmResilver2[1940] Volume ACFS.volacfscws-300: Mirror recovery done
2016-06-03 19:25:47.232: asmResilver2[6660] Volume EXPORTORA.volexpdp-226: Mirror recovery start
2016-06-03 19:25:47.333: asmResilver2[6660] Volume EXPORTORA.volexpdp-226: Mirror recovery done
2016-06-07 11:33:32.637: asmResilver2[5345] Volume ACFS.volacfscws-300: Mirror recovery start
2016-06-07 11:33:32.640: asmResilver2[5345] Volume ACFS.volacfscws-300: Mirror recovery done
2016-06-07 11:33:32.857: asmResilver2[6660] Volume EXPORTORA.volexpdp-226: Mirror recovery start
2016-06-07 11:33:32.961: asmResilver2[6660] Volume EXPORTORA.volexpdp-226: Mirror recovery done
2016-06-07 16:50:33.263: asmResilver2[11470] Volume ACFS.volacfscws-300: Mirror recovery start
2016-06-07 16:50:33.267: asmResilver2[11470] Volume ACFS.volacfscws-300: Mirror recovery done
2016-06-07 16:50:33.671: asmResilver2[9805] Volume EXPORTORA.volexpdp-226: Mirror recovery start
2016-06-07 16:50:33.777: asmResilver2[9805] Volume EXPORTORA.volexpdp-226: Mirror recovery done
2016-06-07 20:22:10.277: asmResilver2[9843] Volume ACFS.volacfscws-300: Mirror recovery start
2016-06-07 20:22:10.281: asmResilver2[9843] Volume ACFS.volacfscws-300: Mirror recovery done
2016-06-07 20:22:10.410: asmResilver2[9842] Volume EXPORTORA.volexpdp-226: Mirror recovery start
2016-06-07 20:22:10.478: asmResilver2[9842] Volume EXPORTORA.volexpdp-226: Mirror recovery done
2016-09-03 18:48:29.779: asmResilver2[4253] Volume ACFS.volacfscws-300: Mirror recovery start
2016-09-03 18:48:30.173: asmResilver2[4252] Volume EXPORTORA.volexpdp-226: Mirror recovery start
2016-09-03 18:49:10.761: asmResilver2[4253] Volume ACFS.volacfscws-300: Mirror recovery done
2016-09-03 18:53:19.050: asmResilver2[4252] Volume EXPORTORA.volexpdp-226: Mirror recovery done
2016-09-03 18:53:19.242: asmResilver2[24255] Volume EXPORTORA.volexpdp-226: Mirror recovery start
2016-09-03 20:54:09.219: asmResilver2[24255] Volume EXPORTORA.volexpdp-226: Mirror recovery done
2017-04-11 10:11:23.334: asmResilver2[30137] Volume ACFS.volacfscws-300: Mirror recovery start
2017-04-11 10:11:23.989: asmResilver2[612] Volume EXPORT.volexpdp-440: Mirror recovery start
2017-04-11 10:11:48.686: asmResilver2[30137] Volume ACFS.volacfscws-300: Mirror recovery done
2017-04-11 10:16:51.762: asmResilver2[612] Volume EXPORT.volexpdp-440: Mirror recovery done
2017-04-11 10:16:51.966: asmResilver2[612] Volume EXPORT.volexpdp-440: Mirror recovery start
2017-04-12 12:40:15.534: asmResilver[24516] Volume EXPORT.volexpdp-440: Mirror recovery start
2017-04-12 12:46:31.157: asmResilver[24516] Volume EXPORT.volexpdp-440: Mirror recovery done
2017-04-12 15:16:12.374: asmResilver[24573] Volume EXPORT.volexpdp-440: Mirror recovery start
2017-04-12 16:49:11.462: asmResilver[24573] Volume EXPORT.volexpdp-440: Mirror recovery done
2017-04-27 12:47:24.115: asmResilver[23453] Volume ACFS.volacfscws-300: Mirror recovery start
2017-04-27 12:47:24.530: asmResilver[23471] Volume EXPORT.volexpdp-440: Mirror recovery start
2017-04-27 12:47:41.767: asmResilver[23453] Volume ACFS.volacfscws-300: Mirror recovery done
2017-04-27 13:41:47.451: asmResilver[23471] Volume EXPORT.volexpdp-440: Mirror recovery done
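From these start/done pairs we can derive each recovery’s duration. A minimal sketch (not an Oracle tool): pair each volume’s “Mirror recovery start” and “done” entries and print the elapsed seconds, assuming both entries fall on the same day. On a real node you would pipe in `grep -i resilver event.log`; the heredoc below is just a sample taken from the output above.

```shell
awk '{
    split($2, t, ":")                     # HH:MM:SS.mmm -> seconds since midnight
    secs = t[1] * 3600 + t[2] * 60 + t[3]
    key = $5                              # volume name field
}
/Mirror recovery start/ { start[key] = secs }
/Mirror recovery done/ && (key in start) {
    printf "%s %.0f s\n", key, secs - start[key]
    delete start[key]
}' <<'EOF'
2016-09-03 18:53:19.242: asmResilver2[24255] Volume EXPORTORA.volexpdp-226: Mirror recovery start
2016-09-03 20:54:09.219: asmResilver2[24255] Volume EXPORTORA.volexpdp-226: Mirror recovery done
EOF
```

For this sample it reports a recovery of roughly two hours, which matches the long run visible in the log above.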
DRL
Resilvering is a mirror rebuild process based on DRL (Dirty Region Logging), a common and classic technique used by other vendors (e.g. Veritas Volume Manager). It’s a concept similar to journaling in filesystems like ext4. The ADVM logs mention this concept explicitly. When the ADVM volume is started, we find a reference to a dirty bit verification, and we can see that our 1.8 TiB are represented by a 17 MiB bitmap with a total of 7,167,232 bits:
V 4294808.039 multipathd[720] AsmVolStateOpen: EXPORT.volexpdp-440: checking for recovery, DRL segment: 0, DRL size: 17M
V 4294808.419 multipathd[720] Asm_logBits: EXPORT.volexpdp-440: Asm_buildRecovMap: recovProg -1, dirty bits = 2596096/7167232 36%
Regions
If we take a look at the Veritas link above, a bitmap is used to represent the volume storage and to identify the regions marked as dirty at the mirror level. Each bit in the bitmap represents, in Veritas’s example, 1024 sectors. In ADVM, we find that a region is sized in sector units, each one containing 512 sectors:
V 4294808.039 multipathd[720] Asm_drlVolAttach: 512 blks/region, 219 pgs, -625344512 blks/vol, 7167231 voldEndB 128 pgNrpcr
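These numbers tie out with simple arithmetic. A back-of-the-envelope check, assuming the usual 512-byte sector: 512 sectors per region gives 256 KiB regions, and 7,167,232 such regions cover roughly the 1.8 TiB of our volume.

```shell
region_bytes=$((512 * 512))               # 512 sectors/region * 512 bytes/sector
regions=7167232                           # bits in the DRL bitmap
vol_bytes=$((regions * region_bytes))
echo "region size: $((region_bytes / 1024)) KiB"
echo "volume size: $((vol_bytes / 1024 / 1024 / 1024)) GiB"
```

This prints a region size of 256 KiB and a volume size of about 1.7 TiB, consistent with the “almost 2 TiB” filesystem described earlier.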
Resilvering monitoring
With a cat or tail on /proc/oks/log, or on acfs.log.0 if we are running 12.1.0.2 or higher, we can identify the percentage of dirty regions that asmResilver still has to process. We can filter the log using the Asm_logBits string and the name of the ADVM volume we are interested in to check the progress:
[root@nodo2 ~]# grep Asm_logBits /proc/oks/log | grep volexpdp-440
V 4294808.419 multipathd[720] Asm_logBits: EXPORT.volexpdp-440: Asm_buildRecovMap: recovProg -1, dirty bits = 2596096/7167232 36%
V 4295108.276 UsmRslvrUpd:PRO[23454] Asm_logBits: EXPORT.volexpdp-440: Asm_writeRecovMap: recovProg 344814079, dirty bits = 2365134/7167232 32%
V 4295408.286 UsmRslvrUpd:PRO[23485] Asm_logBits: EXPORT.volexpdp-440: Asm_writeRecovMap: recovProg 664690175, dirty bits = 2120532/7167232 29%
V 4295708.312 UsmRslvrUpd:PRO[23509] Asm_logBits: EXPORT.volexpdp-440: Asm_writeRecovMap: recovProg 1003418111, dirty bits = 1892618/7167232 26%
V 4296008.304 UsmRslvrUpd:PRO[23473] Asm_logBits: EXPORT.volexpdp-440: Asm_writeRecovMap: recovProg 1299421183, dirty bits = 1664439/7167232 23%
V 4296308.250 UsmRslvrUpd:PRO[23453] Asm_logBits: EXPORT.volexpdp-440: Asm_writeRecovMap: recovProg 1622847999, dirty bits = 1420604/7167232 19%
V 4296608.332 UsmRslvrUpd:PRO[23511] Asm_logBits: EXPORT.volexpdp-440: Asm_writeRecovMap: recovProg 1976684031, dirty bits = 1181212/7167232 16%
V 4296908.305 UsmRslvrUpd:PRO[23506] Asm_logBits: EXPORT.volexpdp-440: Asm_writeRecovMap: recovProg 2339923967, dirty bits = 931023/7167232 12%
V 4297208.364 UsmRslvrUpd:PRO[23505] Asm_logBits: EXPORT.volexpdp-440: Asm_writeRecovMap: recovProg 2677802495, dirty bits = 684544/7167232 9%
V 4297508.303 UsmRslvrUpd:PRO[23453] Asm_logBits: EXPORT.volexpdp-440: Asm_writeRecovMap: recovProg 3034652671, dirty bits = 441843/7167232 6%
V 4297808.358 UsmRslvrUpd:PRO[23511] Asm_logBits: EXPORT.volexpdp-440: Asm_writeRecovMap: recovProg 3349218815, dirty bits = 217472/7167232 3%
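Since these samples arrive at a steady interval, we can go one step further and estimate a rough time to completion. A hypothetical sketch (not an Oracle utility): take consecutive Asm_writeRecovMap samples, compute the resilver rate in regions per second, and divide the remaining dirty regions by it. awk’s numeric coercion of the second field conveniently drops any “/yymmdd…” suffix on the kernel timestamp. On a live node, feed it `grep Asm_logBits /proc/oks/log | grep volexpdp-440`; the heredoc holds two sample lines from the output above.

```shell
awk '{
    t = $2 + 0                              # kernel timestamp in seconds
    split($0, a, "dirty bits = "); split(a[2], b, "/")
    dirty = b[1] + 0                        # regions still to resilver
    if (prev_t != "" && t > prev_t) {
        rate = (prev_d - dirty) / (t - prev_t)
        if (rate > 0)
            printf "rate=%.0f regions/s eta=%.0f s\n", rate, dirty / rate
    }
    prev_t = t; prev_d = dirty
}' <<'EOF'
V 4294808.419 multipathd[720] Asm_logBits: EXPORT.volexpdp-440: Asm_buildRecovMap: recovProg -1, dirty bits = 2596096/7167232 36%
V 4295108.276 UsmRslvrUpd:PRO[23454] Asm_logBits: EXPORT.volexpdp-440: Asm_writeRecovMap: recovProg 344814079, dirty bits = 2365134/7167232 32%
EOF
```

On the full log this prints one rate/ETA estimate per sample interval, which is handy while waiting for a long resilver under QoS-throttled IO like ours.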
Once asmResilver has completed, asmResilver2 starts running, and it seems to use one specific process for each mounted ACFS filesystem, as I interpret from the logs. This second part is much faster than the data resilvering; it verifies and corrects metadata issues at the DRL level for mirror consistency. We can find its activity in the same log file:
V 4298172.073 asmResilver2[23505] Asm_acqRecovery: EXPORT.volexpdp-440: rsid1=1b80101 rsid2=0
V 4298172.073 asmResilver2[23505] Asm_acqRecovery: EXPORT.volexpdp-440: value block INVALID
V 4298172.073 asmResilver2[23505] Asm_acqSegment: EXPORT.volexpdp-440: called for SEG 1
V 4298172.073 asmResilver2[23505] Asm_acqSegment: EXPORT.volexpdp-440: odlm_lock returned 36
V 4298172.073 asmResilver2[23505] Asm_acqSegment: EXPORT.volexpdp-440: called for SEG 2
V 4298172.082 asmResilver2[23505] Asm_buildRecovMap: EXPORT.volexpdp-440: DRL segment 2 is clean
V 4298172.082 asmResilver2[23505] Asm_acqSegment: EXPORT.volexpdp-440: called for SEG 3
V 4298172.086 asmResilver2[23505] Asm_buildRecovMap: EXPORT.volexpdp-440: DRL segment 3 is clean
V 4298172.086 asmResilver2[23505] Asm_acqSegment: EXPORT.volexpdp-440: called for SEG 4
V 4298172.090 asmResilver2[23505] Asm_buildRecovMap: EXPORT.volexpdp-440: DRL segment 4 is clean
V 4298172.090 asmResilver2[23505] Asm_acqSegment: EXPORT.volexpdp-440: called for SEG 5
V 4298172.094 asmResilver2[23505] Asm_buildRecovMap: EXPORT.volexpdp-440: DRL segment 5 is clean
V 4298172.094 asmResilver2[23505] Asm_acqSegment: EXPORT.volexpdp-440: called for SEG 6
V 4298172.104 asmResilver2[23505] Asm_buildRecovMap: EXPORT.volexpdp-440: DRL segment 6 is clean
V 4298172.104 asmResilver2[23505] Asm_acqSegment: EXPORT.volexpdp-440: called for SEG 7
V 4298172.111 asmResilver2[23505] Asm_buildRecovMap: EXPORT.volexpdp-440: DRL segment 7 is clean
V 4298172.111 asmResilver2[23505] Asm_acqSegment: EXPORT.volexpdp-440: called for SEG 8
V 4298172.118 asmResilver2[23505] Asm_buildRecovMap: EXPORT.volexpdp-440: DRL segment 8 is clean
V 4298172.118 asmResilver2[23505] Asm_acqSegment: EXPORT.volexpdp-440: called for SEG 9
V 4298172.122 asmResilver2[23505] Asm_buildRecovMap: EXPORT.volexpdp-440: DRL segment 9 is clean
V 4298172.122 asmResilver2[23505] Asm_acqSegment: EXPORT.volexpdp-440: called for SEG 10
V 4298172.128 asmResilver2[23505] Asm_buildRecovMap: EXPORT.volexpdp-440: DRL segment 10 is clean
V 4298172.128 asmResilver2[23505] Asm_acqSegment: EXPORT.volexpdp-440: called for SEG 11
V 4298172.132 asmResilver2[23505] Asm_buildRecovMap: EXPORT.volexpdp-440: DRL segment 11 is clean
V 4298172.132 asmResilver2[23505] Asm_acqSegment: EXPORT.volexpdp-440: called for SEG 12
V 4298172.141 asmResilver2[23505] Asm_buildRecovMap: EXPORT.volexpdp-440: DRL segment 12 is clean
V 4298172.141 asmResilver2[23505] Asm_acqSegment: EXPORT.volexpdp-440: called for SEG 13
V 4298172.150 asmResilver2[23505] Asm_buildRecovMap: EXPORT.volexpdp-440: DRL segment 13 is clean
V 4298172.150 asmResilver2[23505] Asm_acqSegment: EXPORT.volexpdp-440: called for SEG 14
V 4298172.156 asmResilver2[23505] Asm_buildRecovMap: EXPORT.volexpdp-440: DRL segment 14 is clean
V 4298172.156 asmResilver2[23505] Asm_acqSegment: EXPORT.volexpdp-440: called for SEG 15
V 4298172.160 asmResilver2[23505] Asm_buildRecovMap: EXPORT.volexpdp-440: DRL segment 15 is clean
V 4298172.160 asmResilver2[23505] Asm_acqSegment: EXPORT.volexpdp-440: called for SEG 16
V 4298172.165 asmResilver2[23505] Asm_buildRecovMap: EXPORT.volexpdp-440: DRL segment 16 is clean
V 4298172.213 asmResilver2[23505] Asm_startRecovery: EXPORT.volexpdp-440: is clean, no rcvy needed
V 4298172.215 asmResilver2[23505] Asm_startRecovery: EXPORT.volexpdp-440: mirror recovery thread exiting
Does a resilvering operation impact the whole ACFS volume?
For me, one of the major questions about this process was whether all the data in an ACFS filesystem needs to be processed after a system crash, which seemed too conservative. After checking the logs from a real crash, it is clear to me that this is not what happens: the work depends on how many dirty regions the resilver process finds at the beginning of the rebuild. If we check our logs for historical data, we can see different IO loads for the same volume (warning: the grep condition in this output may not be exhaustive!)
[root@nodo1 acfs]# grep Asm_logBits acfs.log.0 | grep multipathd
V 4294772.311 multipathd[689] Asm_logBits: EXPORTORA.volexpdp-226: Asm_buildRecovMap: recovProg -1, dirty bits = 134144/7167232 1%
V 4294777.747 multipathd[691] Asm_logBits: EXPORTORA.volexpdp-226: Asm_buildRecovMap: recovProg -1, dirty bits = 0/7167232 0%

[root@nodo2 acfs]# grep Asm_logBits acfs.log.0 | grep multipathd
V 4302370.602 multipathd[694] Asm_logBits: EXPORTORA.volexpdp-226: Asm_buildRecovMap: recovProg -1, dirty bits = 0/7167232 0%
V 4294799.421 multipathd[703] Asm_logBits: EXPORT.volexpdp-440: Asm_buildRecovMap: recovProg -1, dirty bits = 5164001/7167232 72%
V 4294808.419 multipathd[720] Asm_logBits: EXPORT.volexpdp-440: Asm_buildRecovMap: recovProg -1, dirty bits = 2596096/7167232 36%
So, a crash won’t always generate the same IO activity for an ADVM volume; it depends on the number of dirty regions after the crash:
V 4294808.039 multipathd[720] AsmVolStateOpen: EXPORT.volexpdp-440: checking for recovery, DRL segment: 0, DRL size: 17M
V 4294808.050 multipathd[720] Asm_buildRecovMap: EXPORT.volexpdp-440: DRL segment 1 is clean
V 4294808.057 multipathd[720] Asm_buildRecovMap: EXPORT.volexpdp-440: DRL segment 2 is dirty
V 4294808.109 multipathd[720] Asm_buildRecovMap: EXPORT.volexpdp-440: DRL segment 3 is clean
V 4294808.115 multipathd[720] Asm_buildRecovMap: EXPORT.volexpdp-440: DRL segment 4 is clean
V 4294808.121 multipathd[720] Asm_buildRecovMap: EXPORT.volexpdp-440: DRL segment 5 is clean
V 4294808.130 multipathd[720] Asm_buildRecovMap: EXPORT.volexpdp-440: DRL segment 6 is clean
V 4294808.138 multipathd[720] Asm_buildRecovMap: EXPORT.volexpdp-440: DRL segment 7 is clean
V 4294808.143 multipathd[720] Asm_buildRecovMap: EXPORT.volexpdp-440: DRL segment 8 is clean
V 4294808.150 multipathd[720] Asm_buildRecovMap: EXPORT.volexpdp-440: DRL segment 9 is clean
V 4294808.154 multipathd[720] Asm_buildRecovMap: EXPORT.volexpdp-440: DRL segment 10 is clean
V 4294808.162 multipathd[720] Asm_buildRecovMap: EXPORT.volexpdp-440: DRL segment 11 is clean
V 4294808.167 multipathd[720] Asm_buildRecovMap: EXPORT.volexpdp-440: DRL segment 12 is clean
V 4294808.170 multipathd[720] Asm_buildRecovMap: EXPORT.volexpdp-440: DRL segment 13 is clean
V 4294808.170 multipathd[720] Asm_buildRecovMap: EXPORT.volexpdp-440: DRL segment 14 is clean
V 4294808.184 multipathd[720] Asm_buildRecovMap: EXPORT.volexpdp-440: DRL segment 15 is clean
V 4294808.192 multipathd[720] Asm_buildRecovMap: EXPORT.volexpdp-440: DRL segment 16 is clean
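To get a quick feel for how much work a given crash left behind, we can tally clean versus dirty DRL segments per volume. A small sketch: on a real node, feed it `grep Asm_buildRecovMap acfs.log.0 | grep volexpdp-440`; the heredoc below is just a stand-in sample.

```shell
awk '/is dirty/ { d++ } /is clean/ { c++ }
     END { printf "clean=%d dirty=%d\n", c + 0, d + 0 }' <<'EOF'
V 4294808.050 multipathd[720] Asm_buildRecovMap: EXPORT.volexpdp-440: DRL segment 1 is clean
V 4294808.057 multipathd[720] Asm_buildRecovMap: EXPORT.volexpdp-440: DRL segment 2 is dirty
V 4294808.109 multipathd[720] Asm_buildRecovMap: EXPORT.volexpdp-440: DRL segment 3 is clean
EOF
```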
Pairing
Talking about mirrors, the first thing that comes to my mind is Tommy, grown up after his childhood trauma, staring into a mirror while his mother screams “Go to the mirror!”… Look at him in the mirror, dreaming. What is happening in his head?

