Wednesday, July 11, 2007

Lungs and High Frequency Ventilation: The Perfect Blend of Man and Machine

Today, I will discuss the project I have been working on for the past few weeks. It involves lungs. Please enjoy reading about lungs, because I truly enjoyed writing about them.

A brief introduction to lungs
What do dinosaurs, ninjas, and The Fonz all have in common? If you said “they are all cool” then you are correct. Another acceptable (and more obvious) answer is that they all have lungs.
Why is this more obvious? If they didn’t have lungs, they would be unable to breathe, which would be most uncool. Imagine, for example, that The Fonz couldn’t breathe. What would happen? For one thing, he wouldn’t be able to say “Ayyyyy”, the very line that earned him an honorary PhD in coolness. I'm sure you now realize why the lungs are so important. But what do they do?

The lungs are just one part of the respiratory system. The primary goal of the respiratory system is to deliver oxygen to the blood and to remove carbon dioxide from the blood to the environment. The respiratory system consists of the controller (signal generator in the brain), the pump (muscles such as the diaphragm and intercostals), and the site of gas flow and exchange (the lungs). Respiration occurs by pumping fresh, oxygenated air in through the trachea and delivering it to the alveoli, which are air sacs that form a blood/gas interface with capillaries to perform the gas exchange. By maintaining concentration gradients, oxygen diffuses through the alveoli to the blood while the carbon dioxide diffuses from the blood to the alveoli.

The trachea splits (bifurcates) in two directions to form airways in the left and right lungs. These airways continue to bifurcate for 15 more generations and become smaller each time. The branches of generation 16 are known as the terminal bronchioles. Up until this point the gas is transported by convection and no gas exchange occurs. Consequently, the first 16 generations are termed “anatomic dead space,” which sounds like something out of a science fiction movie. The next 7 generations are known as the “respiratory zone,” which consists of airways lined with alveoli that end in alveolar sacs (acini). The average person has about 300 million alveoli, each roughly 100 micrometers in diameter. Together they provide a surface area of up to 100 square meters. This huge surface area is ideal for fast gas diffusion, since diffusion is proportional to the surface area.
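
Since diffusion scales with surface area, here is a quick back-of-the-envelope comparison (with order-of-magnitude numbers I've assumed purely for intuition) showing why packing the lung into millions of tiny sacs beats one big balloon:

```python
from math import pi

# Compare one big sphere holding ~3 L of gas against the quoted ~100 m^2 of
# alveolar surface. All numbers are rough, illustrative assumptions.
V = 3e-3                          # lung gas volume, m^3 (~3 liters)
r = (3 * V / (4 * pi)) ** (1/3)   # radius of a single sphere with that volume
A_single = 4 * pi * r ** 2        # surface area of that single sphere

A_alveoli = 100.0                 # quoted total alveolar surface area, m^2
print(f"Single sphere: {A_single:.2f} m^2 vs alveoli: {A_alveoli:.0f} m^2 "
      f"(~{A_alveoli / A_single:.0f}x more area for diffusion)")
```

The single sphere comes out to about 0.1 square meters, roughly a thousandth of the alveolar surface area, which is the whole point of all that branching.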

High frequency oscillatory ventilation (HFOV)
Sometimes, for a variety of reasons, the lungs will fail. At this point, the person will have two options: die, or be temporarily transformed into a cyborg. Those who choose the latter are placed on a mechanical ventilator, which allows control over the pressure, volume, and rate of air delivered to the alveoli. The volume of air in a single breath is known as the "tidal volume." Conventional ventilators can have a harmful effect on infants because normal tidal volumes may overstretch the lungs. HFOV is a highly effective alternative that uses high frequencies (10 - 15 breaths per second) and smaller tidal volumes, reducing the risk of lung damage.

It is important to note that nobody (including The Fonz) really understands why HFOV works so well. Specifically, why do small tidal volumes at high frequencies ventilate so well? How strongly does the frequency depend on the geometry of the lung? We would like to know the answers, so we are in the process of constructing a mathematical model. I am currently continuing the work of the last two generations of immersion participants.

The simplest approach is to consider three parameters of airflow in the lungs: resistance, inertance, and compliance. Resistance arises from the geometry of the airway and the viscosity of air, and is essentially the proportionality constant between pressure drop and flow rate. Inertance is related to the force needed to accelerate air. In a static model this term is absent, and in low frequency spontaneous breathing it is negligible. However, when you have high frequency oscillations in pressure (as in the case of HFOV), the air constantly has to be accelerated and decelerated, so inertance plays a much larger role. Finally, compliance is related to the “stretchiness” of the airway. It is defined as the change in volume per change in pressure. High compliance means stretchy, while low compliance means rigid. These three properties are analogous to resistance, inductance, and capacitance in electronic circuits. As a result, much insight regarding frequency response has been gained from studying RLC circuits.
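
To make the analogy concrete, here is a minimal sketch of a single airway segment treated as a series R-I-C "circuit." The parameter values are rough orders of magnitude I've assumed for illustration, not measured lung data; the point is that the impedance minimum lands at the familiar resonant frequency 1/(2π√(IC)):

```python
import numpy as np

# One airway segment as a series R-I-C circuit (resistance, inertance,
# compliance). Parameter values are assumed, not measured.
R = 2.0    # resistance, cmH2O·s/L
I = 0.01   # inertance, cmH2O·s²/L
C = 0.1    # compliance, L/cmH2O

f = np.linspace(0.1, 20, 2000)          # frequency sweep, Hz
w = 2 * np.pi * f
Z = R + 1j * w * I + 1 / (1j * w * C)   # series impedance, like R + jwL + 1/(jwC)

f_res = 1 / (2 * np.pi * np.sqrt(I * C))  # where inertance and compliance cancel
print(f"Sweep minimum at {f[np.argmin(np.abs(Z))]:.1f} Hz; "
      f"theory gives 1/(2*pi*sqrt(I*C)) = {f_res:.1f} Hz")
```

With these made-up values the resonance sits around 5 Hz, which is at least suggestive of why frequencies in the HFOV range might be interesting.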

In the past, Dr. Frayer and his students have attempted to model most of the major airways (down to generation 12) by constructing branched RLC circuits and solving for the voltage and current in each generation. So far, I have been able to solve for up to 17 generations including a capacitor model of the respiratory zone. In determining the transfer function (ratio of output pressure to input pressure), I have discovered multiple resonance peaks, the positions of which are highly dependent on the lung compliance. However, most of these peaks are out of the range of frequencies used in HFOV.
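
For intuition about where such peaks come from, here is a toy version of that kind of branched-ladder calculation. To be clear, this is not Dr. Frayer's model: the tree is symmetric, the scaling rules and all parameter values are invented, and each generation gets a shunt "airway wall" compliance so that more than one resonance can appear.

```python
import numpy as np

def transfer_function(f, n_gen=17, R0=0.5, I0=0.002, Cw=0.0005, C_term=0.05):
    """P_out/P_in for a symmetric bifurcating RLC ladder (toy parameters)."""
    w = 2j * np.pi * f
    Z_load = 1 / (w * C_term)        # lumped acinar compliance terminates the tree
    H = np.ones_like(w)              # accumulated pressure ratio P_out/P_in
    for n in reversed(range(1, n_gen + 1)):
        # Series R + jwI per segment; 2**n identical branches act in parallel.
        Z_seg = (R0 * 1.2**n + w * I0) / 2**n
        # Shunt compliance of the airway walls at this generation.
        Z_shunt = 1 / (w * Cw * 2**n)
        Z_par = Z_shunt * Z_load / (Z_shunt + Z_load)
        H = H * Z_par / (Z_seg + Z_par)   # voltage-divider step
        Z_load = Z_seg + Z_par            # impedance seen one generation up
    return H

f = np.linspace(0.5, 60, 3000)
mag = np.abs(transfer_function(f))
peaks = f[1:-1][(mag[1:-1] > mag[:-2]) & (mag[1:-1] > mag[2:])]  # local maxima
print("Resonance peaks near (Hz):", np.round(peaks, 1))
```

Working backward from the terminal load, each voltage-divider step is exact for a ladder network, so the product gives the full transfer function without writing out the enormous circuit by hand.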

Dr. Frayer and I have recently decided to shelve this model temporarily, as it is oversimplified and tells us nothing about what occurs in the acinar units. Furthermore, we cannot assume that airflow is entirely convective throughout the lungs, as we have been doing. In both the respiratory zone and the final generations of the dead space, diffusion plays a larger role than convection in gas transport. We have begun looking into modeling software (Fluent, in particular) to model these complex regions. I will write more about this in the next couple of weeks, and hopefully I will have some totally radical and/or awesome pictures.

Tuesday, July 10, 2007

Neurological Surgery Rounds and more!

This past week, I attended neurological surgery ICU rounds and the daily residents' case conference, where the residents discuss how each case should be handled for that day. During rounds, the residents assess neurological function by asking questions like the date, the patient’s name, and the city, and by asking the patients to wiggle their toes and lift their arms. While the patients do not appear outwardly impaired, some are unable to perform these tasks, and a few vary quite visibly in their performance from day to day. It is interesting to get a sense of what the patients are like outside of the angiography suite, where I spend most of my time, and of the process that goes into deciding whether patients should be sent there.

Location of carotid artery and comparison of artery built up with plaque versus normal artery. www.vascularweb.org

I observed several procedures this week, including carotid artery stenting (CAS), which is still in clinical trials to evaluate its effectiveness versus the standard of care, carotid endarterectomy (CEA), in preventing recurrent strokes. Many of you are probably wondering the same thing I was: stenting is such a common procedure, so why is it still in clinical trials for this particular application? To answer this, I’ll first describe the standard of care for treating blockages of the carotid artery, carotid endarterectomy, and then describe the carotid artery stenting procedure.

These procedures treat plaque buildup in the carotid artery. This is particularly sensitive because the carotid artery supplies the head and neck with blood. Plaque buildup eventually reduces blood flow to the brain and raises the possibility that part of the plaque, or a clot formed on it, will break off and cause a stroke.

Image showing carotid endarterectomy, surgical treatment of carotid stenosis. www.vascularweb.org

The surgical treatment and standard of care for carotid plaque buildup, carotid endarterectomy, involves temporarily blocking flow through the carotid artery and surgically removing the plaque from the inside lining of the artery.

Carotid artery stenting involves performing angiography to visualize the blocked artery. A balloon can then be used to expand the artery, and a stent is placed to hold the artery open. Follow-up angiography is performed after stenting to assess the success of the procedure.

In the past there had been problems with the plaque breaking loose during the stenting procedure, traveling to the brain, and causing a stroke. As a result, a new system was developed that includes a device to catch plaque before it reaches the brain. In particular, in this study, part of the CREST (Carotid Revascularization Endarterectomy versus Stenting Trial) study, Dr. Gobin is using the ACCULINK Carotid Stent System, which includes a protection system that opens up to shield the arteries leading to the brain from embolic material while still allowing blood to flow through.
Image showing carotid artery stenting. www.vascularweb.org

Monday, July 9, 2007

Laparoscopic donor nephrectomy

Laparoscopic donor nephrectomy refers to a minimally invasive procedure to remove a kidney from the donor. I have observed two living donor kidney donations in which laparoscopic donor nephrectomy was performed concomitantly with the actual kidney transplant. The laparoscopic part was actually very cool 1) because of its semi-robotic nature and 2) because it is projected on a screen, so I didn't have to strain to see it.

This procedure is performed through two or three 1/2-inch puncture sites. With the patient under general anesthesia, the surgeons make the 1/2-inch incisions. A laparoscope containing a video camera is introduced through one port, with its image projected onto a monitor so that the surgeons can see and control activity inside the abdomen. Carbon dioxide is introduced to inflate the abdominal cavity (insufflation) and provide working space. The other two ports accommodate the devices that perform the actual procedure, such as a stapler. Once the organ (kidney) has been excised, a 2-inch incision is made under the bellybutton to provide a path for extracting the donated kidney. It is really neat to watch. Somehow the surgeon manages to slide in a plastic bag into which the kidney is placed before it is taken out through the 2-inch incision. This whole procedure is done by watching the projected images on the monitor.


Excision of the renal vein. www.laparoscopic.com

This state-of-the-art procedure has several advantages over open kidney donation. First and most importantly, the smaller incision shortens the postoperative hospital stay (usually two days), minimizes post-op discomfort, carries less risk of hernia and scar formation, is more cosmetic, and speeds the patient's complete return to normal activity (2 weeks, compared to 6 for open surgery). Second, this procedure is much quicker than the old 12-inch muscle-splitting incision; it is a 2-hour operation. Lastly, the procedure doesn't require detaching the diaphragm, and therefore minimizes the risk of infectious complications such as pneumonia.

I thought this procedure was a textbook example of how technology has revolutionized medicine. Watching the surgeon maneuver around several organs (the intestines and the spleen) to reach the kidney was not only astounding but also gave me a great appreciation of anatomy and its mastery. All that said, it nonetheless left me with one lasting distaste. When the scope is inserted into the belly, the stomach and the other organs become vividly visible, including the yellow, disgusting adipose tissue around the stomach lining. It is GROSS!

Dickson!


Again, With Feeling

"I hate this."

These are the words of Dr. Weinsaft as he describes the particularly "tedious," "eye numbing" job of evaluating basic cardiac function via cineMRI and phase contrast MR techniques. After observing him perform a few of these measurements, I'm not sure if I disagree. Perhaps I should explain...

A cineMRI is a variation of magnetic resonance imaging (MRI) that sacrifices spatial resolution for temporal resolution. Traditionally, MRI focuses heavily on spatial resolution, producing highly detailed images through scan times of several seconds or even minutes. In cineMRI, images are rapidly acquired at relatively low resolution and assembled into an image sequence resembling a smooth flip-book movie. A cinema, perhaps? The resulting movie clearly shows heart motion and, when properly positioned, can provide insight into the operation and function of rapidly moving structures, such as the heart.



A typical cineMRI sequence (obtained from www.isi.uu.nl)

The above is a "short axis" view of the heart, a perspective that clearly shows the right and left ventricles. Given the movie-like quality of the image sequence, it is possible to evaluate the volume of the ventricles through an entire cardiac cycle, and from this we can obtain mountains of useful measures: End Diastolic Volume (EDV), End Systolic Volume (ESV), Stroke Volume (SV), Ejection Fraction (EF), Cardiac Output (CO); the wealth of information is endless. But first....how?

Manual labor.

"Sigh," says Dr. Weinsaft.

This is done by meticulously tracing each of the frames manually, separating structures of interest by dropping hundreds of points via a steady hand and a trusty mouse. Such an exercise takes an experienced cardiologist over ten minutes. For me? Coupled with a finicky software package that heartlessly punishes mistakes by totally erasing existing work? Better section off an afternoon.


A single "segmented" frame (obtained from www.isi.uu.nl)

Upon repeating the process 30 or more times for each short axis slice, we obtain something like this:

A fully segmented image sequence (obtained from www.isi.uu.nl)

From here, the software package takes over, automatically calculating the relevant statistics from the segments. After copying down this information, Dr. Weinsaft begins his clinical reading.
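
For a sense of what the software is calculating, here is the arithmetic in miniature, using volumes I made up for a single hypothetical patient (the tracing, of course, is the hard part):

```python
# Standard cardiac function measures from segmented ventricular volumes.
# All values below are invented for illustration (volumes in mL).
EDV = 120.0    # End Diastolic Volume: the ventricle at its fullest
ESV = 50.0     # End Systolic Volume: the ventricle at its emptiest
HR = 70.0      # heart rate, beats per minute

SV = EDV - ESV         # Stroke Volume: blood ejected per beat
EF = SV / EDV          # Ejection Fraction: fraction of the EDV ejected
CO = SV * HR / 1000.0  # Cardiac Output in L/min

print(f"SV = {SV:.0f} mL, EF = {EF:.0%}, CO = {CO:.1f} L/min")
# SV = 70 mL, EF = 58%, CO = 4.9 L/min: typical resting values
```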

The problem? Segmenting the heart manually is a time-consuming, monotonous affair that can often take more time than the reading itself. If only there was an automated method...

Fortunately, there is. An algorithm, developed by Noel Codella of Cornell University, is capable of segmenting the inner region of the left ventricle, with future development aimed at fully segmenting all regions of interest in the heart. In addition to being far more consistent in its segmentation decisions (an admittedly subjective judgment call at times when done manually), Noel's algorithm is fast: the time required to evaluate a single case is 1/20th that of a manual evaluation by an experienced observer.

Unfortunately, the algorithm is still being evaluated and its results cannot yet be fully relied upon. My job for the remainder of the summer is to segment as many cases as possible, using both the automatic algorithm and manual segmentation. By quantifying the agreement between the two methods and by measuring useful statistics such as the amount of time saved through automation, we hope to provide compelling evidence of the accuracy and speed of computer-assisted evaluations.
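
One plausible way to quantify that agreement (my own illustration, not necessarily the study's actual protocol) is an overlap score such as the Dice coefficient between the manual and automatic masks:

```python
import numpy as np

def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice coefficient: 2|A∩B| / (|A| + |B|); 1.0 means perfect overlap."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy example: two slightly offset 2D masks of a "ventricle".
manual = np.zeros((64, 64), dtype=bool)
auto = np.zeros((64, 64), dtype=bool)
manual[20:40, 20:40] = True
auto[22:42, 20:40] = True
print(f"Dice = {dice(manual, auto):.3f}")   # 0.900 for this toy pair
```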

This convergence between computerized automation and "traditional" human evaluation is one of the aspects of engineering that interests me the most. A cardiologist shouldn't have to spend more time connecting the dots than he does rendering a diagnosis. An experienced clinician with decades of schooling and experience shouldn't be limited by how quickly they can perform an activity we learned in preschool. And with a little more work from engineers, hopefully they won't have to.

My Research

Since my doctor was out on vacation this week, I was mainly able to work on my project – a five-decade review of esophageal atresia with and without tracheoesophageal fistula.

Esophageal atresia (EA)/tracheoesophageal fistula (TEF) is a congenital disease involving a failure of the esophagus to develop properly. Normally the esophagus (which connects the mouth to the stomach) and the trachea (which connects the mouth to the lungs) are separate pipes; in EA/TEF these pipes are underdeveloped and/or connected. The failure to separate is pinpointed to the fourth fetal week, when the trachea and esophagus should start to divide.

About 85% of all EA/TEF cases have a proximal esophagus that ends in a blind pouch (the esophageal atresia) while the distal esophagus (the portion coming from the stomach) connects to the trachea (the tracheoesophageal fistula). There are many other types of EA/TEF, some with only EA and no fistula. Some cases involve EA with both proximal and distal TEFs, meaning both the upper and lower segments of the esophagus connect to the trachea. Other cases may involve EA with a proximal TEF but no distal TEF.

aafp.org


The first case was reported in 1670, but a successful intervention did not occur until 1941. After that operation, the mortality rate for EA/TEF decreased dramatically. Between 1941 and 1995, the main risk factors for survival were weight and other complications. After 1995, weight is no longer an issue, as ICU, surgical, and anesthetic techniques have advanced greatly. Cardiac and other complications remain an issue, as they sometimes prevent emergency surgical intervention; cardiac complications arise in 30% of all cases.

My project involves statistical analysis of all the cases of EA/TEF at New York Presbyterian Hospital over five decades. The cases are divided into three eras: 1960 to 1974, 1975 to 1995, and 1995 to 2007. We are interested in seeing whether there is a statistical difference in mortality, morbidity, and other complication rates between eras. We also plan to make follow-up calls to patients from different eras to see what postoperative care is still needed many years later.
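
As a sketch of what that era comparison might look like (the counts below are placeholders I invented, not our actual data), a chi-square test of independence on a mortality-by-era table would go something like this:

```python
from scipy.stats import chi2_contingency

# Contingency table of outcomes by era. All counts are made up for illustration.
#         died, survived
table = [[6, 14],    # era 1: 1960 to 1974
         [5, 30],    # era 2: 1975 to 1995
         [3, 32]]    # era 3: 1995 to 2007

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
# p > 0.05 would mean no statistically significant difference between eras.
```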

From what I’ve calculated so far, there is no statistically significant difference in mortality, morbidity, or complication rates between eras, despite improvements in patient care, neonatal intensive care, and surgical technique. The data set could possibly be biased, and I’m looking into this in the upcoming weeks.

Sunday, July 8, 2007

Modern Medicine

Vascular Surgery
Week 3

This week I’ll outline two procedures I saw that bookend modern medicine: the guillotine amputation and the arteriovenous fistula.

The Guillotine Amputation

http://www.steinergraphics.com/

This is a guillotine amputation. As its name implies, this type of amputation is performed by cutting linearly through, and perpendicular to, the long axis of a limb. It’s used in emergency situations for quick removal of malignant infection. It’s also called a “flapless” amputation because no tissue is left to cover the stump. The wound is dressed but left open to avoid new infection until a proper amputation can be performed when the patient is more stable. This picture shows a transtibial amputation with the cut ends of the tibia and fibula exposed at the stump.

I got a first-hand view of this amputation as I held the patient’s stump while the surgeon wrapped the wound. Needless to say, this condition is as painful as it looks. This procedure is the same one that was used on the battlefields of the American Civil War nearly 150 years ago.

The Arteriovenous Fistula

http://www.vascularweb.org/

As I mentioned previously, the AV fistula is the joining of an artery to a vein. This has the effect of increasing blood flow through the vein, which can be felt as a vibration or “thrill” when the anastomosis under the skin is palpated (think of a cat purring). In response to the increased blood flow, the vein stretches and remodels to become stronger. This is called maturing; after 3-6 months the vein will have increased in size and will look like a cord under the skin. This is an important procedure for dialysis patients: once mature, the fistula is strong enough to accommodate the large needles needed for dialysis. This was a fantastic procedure to see: the surgeon opens the arm and literally sews an artery to a vein.

The guillotine amputation is crude and primitive, while the AV fistula is complex and built on significant technological advancement, yet both techniques currently qualify as modern medicine.

or maybe scary statisticians?!

So last time I wrote about three large-scale lung cancer screening trials that concluded unanimously that x-ray screening for lung cancer does not decrease disease-specific mortality. Now the question becomes: what are the right parameters for drawing conclusions from a diagnostic clinical trial, and what factors should be taken into account when evaluating the results of these studies?
An article published in 1999 by Dr. Henschke concluded that CT screening for lung cancer is more effective than chest x-ray, since in a small observational study she observed that 85% of cancers are missed using CXR. This motivated the NCI to put together what is considered to be the most expensive RCT, the National Lung Screening Trial (NLST), in which nearly 50,000 people are screened in two arms (CT vs. x-ray) for three years and then followed up for 7-8 years. The final measure of screening success is the mortality rate, and the final results are expected in 2008-2009.
One of the issues associated with setting up an RCT is cost. You need a large population in order to avoid statistical biases and population heterogeneity. So far the NLST is estimated to have cost $250,000,000, and it seems this figure could double (or may already have doubled). Because the study is so costly, screening takes place over only 2-3 years; the lengthier the screening, the more costly the study. An important question: will 2-3 years of screening necessarily decrease mortality? Answering this involves complicated statistical modeling, but it has been estimated that if the NLST or previous RCTs had screened for 18 years, the decrease in mortality between the control and screened arms would have been significant. Of course, no one in their right mind will screen 50,000 people for 18 years; the country may very well go bankrupt. So instead these trials cut the number of years of screening, and it is no wonder they do not detect a significant decrease in mortality. After you initiate screening, you also have to account for how long it takes for effects to become observable, especially since in the first few years mortality would not differ: when screening starts, there are already many patients with late-stage lung cancer who will die regardless.
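
To put rough numbers on the "you need a large population" point, here is a standard two-proportion sample-size calculation, using mortality rates I've assumed purely for illustration (not NLST's actual design parameters):

```python
# Back-of-the-envelope sample size for detecting a mortality difference
# between two arms. All rates below are assumed, illustrative numbers.
p_control = 0.020   # assumed lung cancer mortality over follow-up, control arm
p_screen = 0.016    # assumed mortality if screening cuts deaths by 20%

z_alpha = 1.96      # two-sided significance level of 0.05
z_beta = 0.84       # 80% power

var = p_control * (1 - p_control) + p_screen * (1 - p_screen)
n_per_arm = (z_alpha + z_beta) ** 2 * var / (p_control - p_screen) ** 2
print(f"~{n_per_arm:,.0f} per arm, ~{2 * n_per_arm:,.0f} total")
# ~17,000 per arm, ~35,000 total: the same order as NLST's ~50,000 subjects.
```

Small absolute differences in a rare outcome drive the denominator toward zero, which is exactly why these trials balloon into tens of thousands of subjects.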
More on the other issues later....