Saturday, July 14, 2007

A little off topic!

I know I promised to continue writing about the issues around clinical trials, but I have also been observing some interesting radiological procedures and I would like to share them with everyone.
First I am going to talk a little about lung anatomy and then explain what can happen when you perform a lung biopsy. Imagine you push your fist into an inflated balloon. You end up with one layer of balloon in immediate contact with your fist, and the rest of the balloon in immediate contact with the outside air. Between the two layers is the air "inside the balloon." The fist is analogous to how your lung sits inside the thoracic cavity, and the balloon is called the pleura. The layer in contact with the lung is called the visceral pleura, and the one in contact with the thoracic cavity is called the parietal pleura. Between these two layers is the pleural fluid, whose surface tension is what keeps the lungs inflated. So you may guess what happens if a needle gets into the pleural space: air can enter (a condition called pneumothorax) and the patient may very well get a collapsed lung.

When a radiologist detects a suspicious nodule on a lung CT and decides a biopsy is necessary, a fine needle is inserted between the ribs, passes through the pleural space, and ultimately reaches the nodule. In 25% of cases the patient will get a pneumothorax, but 98% of those recover on their own, meaning the pleura heals by itself and the lung re-inflates (note: the collapse is not due to the needle reaching the lung parenchyma itself, but the pleural space). In the remaining 2% (which includes people who are either very old or have other lung complications such as emphysema), the radiologist has to insert a tube into the chest and pump the lung back up. The tube is small, with a diameter close to a spaghetti noodle.
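Putting those rates together (treating the quoted percentages as simple conditional probabilities, which is a back-of-the-envelope simplification), only a small fraction of biopsy patients end up needing the chest tube:

```python
# Rough arithmetic on the biopsy numbers quoted above, treated as
# simple conditional probabilities (an assumption, not a clinical model).
p_pneumothorax = 0.25   # fraction of biopsies causing a pneumothorax
p_needs_tube = 0.02     # of those, the fraction that need a chest tube

p_chest_tube = p_pneumothorax * p_needs_tube
print(f"Chance a biopsy leads to a chest tube: {p_chest_tube:.1%}")  # 0.5%
```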

Switching gears, I want to talk about some exciting interventional radiology, which is basically collecting images by inserting catheters inside arteries and veins while the patient is alert (minimally invasive, with local anesthesia). Among the cases I have observed, I think the most interesting one was an IVC filter placement. IVC stands for inferior vena cava, the major vein that carries de-oxygenated blood from the legs and lower body to the right atrium of the heart. The filter is a little guy that looks like an umbrella without the cloth on top, of course, and it is placed inside the vein to prevent clots and pieces of plaque from reaching the heart and blocking the pulmonary arteries, which carry de-oxygenated blood to the lungs for oxygen exchange. The filter is shown below. The way the procedure works is that the interventional radiologist attaches the hook of the filter to the catheter and then inserts the catheter into the femoral vein. A sheath covers the filter; once the sheath is pulled back, the filter opens up like an umbrella. After injecting contrast to make sure the filter is at the right place, they pull out the catheter and sheath and confirm the filter stays in position. It is interesting to know that you only feel pain on the surface of your skin and the surface of the abdominal cavity. That's why if they place a catheter inside your veins and arteries you probably won't feel anything!

Friday, July 13, 2007

T1 Mapping

Thanks to Thanh for taking his time to explain the purpose of the whole T1 mapping business; now I finally understand what the project is about and why we need it. Unlike CT imaging, in which the intensity can be described quantitatively as the attenuation of the x-ray source in Hounsfield units, with conventional MR imaging techniques the signal is relative. It differs from one imaging experiment to another and therefore cannot be used for direct signal quantification.

The basic principle behind MRI is that each hydrogen proton (spin) in a system (such as the hydrogen of water molecules in our body) gives rise to nuclear magnetization, all aligned in the same direction under an external magnetic field. If RF radiation at a specific frequency is applied to the proton magnetization, it is perturbed from its original state. The system then relaxes back to its equilibrium state. This recovery process of the magnetization follows an exponential form, and the rate at which the magnetization recovers is described by a time constant, T1. MRI image intensity is proportional to this magnetization. Different tissues have different T1 values, so when we take an MRI image at a specific time after the RF pulse, the magnetization of different tissues will differ due to their different recovery rates, hence creating contrast in the image. This is typically called a T1-weighted image, since contrast is based on the difference in T1 values.

The images above show a conventional MRI image of the brain (T1-weighted) on the left and a T1 map of the same brain on the right. The image on the left will have different intensity values from one imaging experiment to another. However, the T1 map on the right will always be the same for the same subject.

Although the intensity of an image is relative, the T1 value that creates this difference in intensity is a quantitative characteristic of tissue and does not change from experiment to experiment. One way to obtain the T1 value is to record data points along the magnetization recovery process at each location. This can be done by taking MRI images at different times after the RF excitation. Then we fit all the data points at a specific location to an exponential model describing the signal recovery process, as shown in the figure below. The data points are the intensity values of each image at the same location, taken at different times after RF excitation.

Then, from this model, we can obtain the T1 value for a specific location. Since this involves a lot of calculation, it is only practical to compute it at specific points or within a region of interest (e.g., the myocardium). My goal is to develop an image analysis tool so that researchers can interactively select the location on the image they want, and it will perform all the necessary analysis automatically.
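The per-pixel fit described above can be sketched in a few lines. This is a minimal illustration, assuming a simple saturation-recovery model M0*(1 - exp(-t/T1)); the delay times and signal values below are made up, not real scan data:

```python
import numpy as np
from scipy.optimize import curve_fit

def recovery(t, m0, t1):
    """Exponential magnetization recovery after an RF pulse."""
    return m0 * (1.0 - np.exp(-t / t1))

# Synthetic intensities for one pixel location: images acquired at
# several delays after excitation (all values hypothetical).
t = np.array([100, 300, 600, 1000, 2000, 4000], dtype=float)  # ms
true_m0, true_t1 = 1000.0, 900.0                              # a.u., ms
signal = recovery(t, true_m0, true_t1)

# Fit the exponential model to the sampled intensities to estimate T1.
(m0_est, t1_est), _ = curve_fit(recovery, t, signal, p0=(signal.max(), 500.0))
print(f"Estimated T1: {t1_est:.0f} ms")
```

Repeating this fit at every selected pixel yields the T1 map; only the fitted T1, not the relative intensity, carries over between experiments.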

belated blog

Although it’s the fourth week, I still clearly remember the first day in the operating room.

One of the nurses warned me before I entered the OR that some of the operations would be very gory. She also tried to make sure that I wouldn't faint while observing and become the next patient in the emergency room. Another warning was to NOT touch anything, which was also what I wanted.

Actually, this was not the first time I had been in an OR. But all my previous experiences were somewhat unpleasant, because usually I was the one on the operating table, so I either was in no mood to look around or couldn't remember anything after anesthesia. The OR is more crowded than I expected. For a colorectal case, there are one attending surgeon, 2 to 3 residents, 2 to 3 nurses, and an anesthetist. If you add some observers like me, there are almost 10 people surrounding the surgical table. Usually I stand next to the anesthetist, who sits at the head end of the surgical table and records the physiological signals. For some operations on the anal verge area, this is not the best place to observe, but for most operations on the abdomen, or laparoscopy, it is good enough to capture the doctors' every movement.

After the first day, I strongly felt that I need to build up some muscles in my legs if I want to be a doctor. A case can last for several hours, and the doctors keep standing the whole time. Considering they are also constantly making decisions, cutting, and stitching, I really appreciate their great effort in saving people's lives and enhancing patients' quality of life.

Complications of surgery

Over the past few weeks, I've been on pre- and post-operative rounds as well as observing surgeries. The patient progresses from diagnosis, to tests, to the operation, to recovery. They'll keep the patient anywhere from overnight to a few weeks to make sure they are stable and healthy enough to go home. But from further reading of case studies from my research and other operations, the patient is not always fixed and healthy many months or years later.

I’ve been reading case studies for esophageal atresia (EA) and tracheoesophageal fistula (TEF) and their post operative complications. I have also been comparing different eras to see if surgical techniques have improved in the last decade to minimize post operative complications.

In EA/TEF there are many complications after surgery. One is strictures, which means the repaired tubes narrow. Another is anastomotic leaks, which occur where two tubes are stitched together. Doctors may also miss a fistula, meaning the connection between the trachea and esophagus remains. This of course interferes with both feeding and breathing.

a stricture

Of these complications, I've found that surgical techniques over the past decade have not significantly improved the overall complication rate (at the 0.05 significance level). Even with the introduction of Vicryl sutures, which can hold tension for several weeks, and Floseal, which helps build a matrix over the tubes to prevent leaks, post-operative complication rates are still high. Many of these are easy fixes; for example, 1-2 dilations will usually resolve a stricture.

Many of these doctors are trying to find better methods to perform these surgeries. They are also hoping for better technology to make their lives easier as well as preventing patient complications. I guess that’s where we come in.

Thursday, July 12, 2007

Nuclear Stress Test

Patients who feel chest pain or any sort of discomfort that might be related to the heart are usually told to get a nuclear stress test. There are a few different ways of performing a nuclear test, but in the end they all assess the same thing - the distribution of blood in the heart. The test is not always necessary, since there are other ways of assessing a problem in the heart. For example, if a patient complains about chest pain, a doctor can take an EKG, notice a change in the ST segments, and deduce that the patient probably had a heart attack. One could also support this claim with a blood test - looking for increased levels of troponin. The list goes on….

Fortunately, a nuclear stress test offers the clinician a quick way of assessing common problems in the heart. As a result, most patients with heart-related problems will likely have one done. The idea is simple: inject the patient with a radioactive isotope, thallium-201, which concentrates in the heart tissue that is receiving blood supply. Then use a gamma scintillation camera to read the signal from different angles of the body. In the end, the doctor receives a low-spatial-resolution image that depicts portions of the heart receiving less blood as dark spots and portions receiving a lot of blood as bright spots (figure).

Unfortunately, the signal detected by the gamma scintillation camera is not calibrated to an absolute scale. In fact, since the signal is relative, the doctor cannot say much if he observes dark spots in regions of the heart where one would intuitively expect bright spots. One person can claim that the dark spots are due to below-normal levels of perfusion, while someone else can claim that the dark spots are normal and the bright spots are due to above-normal levels of perfusion. Partially for this reason, after obtaining images of the heart at rest, the patient is told to exercise so that the blood supply can be maximized in all portions of the heart. For patients who can't exercise, the alternative is to be injected with a vasodilator (dipyridamole (Persantine), adenosine (Adenoscan), or dobutamine). Shortly afterward, the patient is injected with another isotope, technetium-99m sestamibi (MIBI), in the case of a dual-isotope scan. Each isotope's emission peaks at a different energy level (in keV), so the two are easily filtered and thereby differentiated. In terms of differentiating between the two sets of images (one at rest and the other under stress), one must rely on a little nuclear physics. The radiation emitted by MIBI is usually stronger and scatters less; thus, the spatial resolution of the images of the heart under stress is usually better. Furthermore, MIBI is cleared by the body differently from thallium, so another way of differentiating between the two sets of images is to see where the tracer concentrates in the body: typically either the kidneys or the gall bladder.
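The energy-based separation of the two isotopes can be sketched as simple windowing of detected photon energies. The window boundaries below are illustrative approximations (thallium-201's useful emissions sit roughly in the 60-85 keV range, technetium-99m's photopeak at 140 keV), not clinical acquisition settings:

```python
# Toy sketch of dual-isotope separation by photon energy.
# Window values are illustrative, not clinical protocol settings.
TL201_WINDOW = (60.0, 85.0)    # keV, thallium-201 region (rest tracer)
TC99M_WINDOW = (126.0, 154.0)  # keV, Tc-99m 140 keV peak +/- 10% (stress)

def classify(energy_kev):
    """Assign a detected photon to the rest image, stress image, or reject it."""
    if TL201_WINDOW[0] <= energy_kev <= TL201_WINDOW[1]:
        return "rest"
    if TC99M_WINDOW[0] <= energy_kev <= TC99M_WINDOW[1]:
        return "stress"
    return "rejected"  # scatter or noise outside both windows

events = [72.0, 140.0, 101.0, 68.0, 138.5]
print([classify(e) for e in events])
# ['rest', 'stress', 'rejected', 'rest', 'stress']
```

In a real camera this binning happens in hardware per detected event; the point is simply that non-overlapping photopeaks make the two tracers separable.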

Anyway, the images of the heart at rest and under stress make the claims about the distribution of blood in the heart more plausible. They can help the doctor identify coronary artery disease or other potential anomalies in the heart. Yet this imaging modality could use some improvement. I used to wonder why it was not possible to integrate the resolution of CT with the perfusion-mapping capabilities of a nuclear stress test. Fortunately, I recently discovered that there has been some work underway on this, and one might see a device/software package that does just that in the near future.

My time in the MICU

This past week I spent most of my time in the Medical Intensive Care Unit (MICU). While I was there, I got to do rounds for the first time (the services I have visited thus far generally don't do rounds), which was interesting. I got to hear residents, medical students, and attendings discuss patients and treatment plans, and was amazed at how much they can have memorized at any given time.
While there I also got to hear about how a diabetic who comes into the ER with volume depletion and hyperglycemia (too much sugar in the blood) can be brought from the brink of death (seriously) to outpatient status in less than a day. This seemed to make the attending very happy/excited. He explained that even though one would think that giving insulin would be the first thing to do (to push the sugar into the cells), since the patient was volume depleted, and the volume depletion is part of the source of the hyperglycemia (remember: reduced volume = increased concentration), it would be one of the worst things you could do (since the sugar will take water with it as it moves into the cells, making the volume depletion worse, possibly fatally). He said first you stabilize the airway, then give saline (to raise the circulating volume), and THEN you give insulin. BAM! And the guy can go home the next day.
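The attending's "reduced volume = increased concentration" point is just the arithmetic of concentration = amount / volume. The numbers below are entirely hypothetical, chosen only to show the proportionality:

```python
# Same total glucose, different blood volumes: concentration rises as
# volume falls. All values are made up for illustration.
glucose_mg = 20000.0                    # total glucose in circulation, mg

for blood_volume_dl in (50.0, 35.0):    # normal vs. volume-depleted
    conc = glucose_mg / blood_volume_dl
    print(f"{blood_volume_dl:.0f} dL of blood -> {conc:.0f} mg/dL")
```

So restoring volume with saline alone already brings the measured glucose concentration down, before any insulin is given.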

Also while I was there, I got to witness an interesting procedure that serves as the solution to a problem I had only ever seen in radiology CT images prior to that day. The problem: pleural effusions (liquid in the chest cavity). The solution: poke a hole and drain it (simple, no?). See, normally the lungs fill the chest cavity, with the surface of the lung essentially attached to the surface of the thoracic cavity (left image). However, sometimes fluid builds up in the cavity and pushes on the lung, which can make it difficult to breathe, especially when the amount approaches a full litre. A schematic (bottom left image) and how an effusion looks in a CT scan (bottom right image) can be seen below. But basically, the solution is literally to poke a hole and suction out the fluid by hand (kind of like a bilge pump).

Finally, I heard an interesting talk on two case studies from patients in the MICU. One of the patients has cystic fibrosis, and the discussion went on to cover the epidemiology of the disease. Since I may want to go into public policy after getting my degree, epidemiology is an interesting topic for me. Briefly, cystic fibrosis is a genetic (autosomal recessive) disease that leads to progressive disability and eventually death. The genetic mutation affects the creation of chloride channels, effectively creating channels with a lower conductance.
The interesting part of the discussion was when the attending leading the talk began discussing how genotypes persist for reasons. An example of this logic is how sickle cell anemia primarily affects populations at risk for malaria, and the former grants a certain level of resistance to the latter. So the attending asked

"What disease's effects would be reduced by the reduction of chloride channel conductance?"
Here are the hints:
  1. Third World Disease
  2. Remember: Chloride channel conductance reduction => decreased fluid loss
  3. GI Tract
Answer: Cholera! If you can't lose the ions, you can't lose the water, and therefore you can't dehydrate.

Now that was a cool exercise in logic.

Wednesday, July 11, 2007

Lungs and High Frequency Ventilation: The Perfect Blend of Man and Machine

Today, I will discuss the project I have been working on for the past few weeks. It involves lungs. Please enjoy reading about lungs, because I truly enjoyed writing about them.

A brief introduction to lungs
What do dinosaurs, ninjas, and The Fonz all have in common? If you said “they are all cool” then you are correct. Another acceptable (and more obvious) answer is that they all have lungs.
Why is this more obvious? If they didn't have lungs, they would be unable to breathe, which would be most uncool. Imagine, for example, that The Fonz couldn't breathe. What would happen? For one thing, he wouldn't be able to say "Ayyyyy", the very line that earned him an honorary PhD in coolness. I'm sure you now realize why the lungs are so important. But what do they do?

The lungs are just one part of the respiratory system. The primary goal of the respiratory system is to deliver oxygen to the blood and to remove carbon dioxide from the blood to the environment. The respiratory system consists of the controller (signal generator in the brain), the pump (muscles such as the diaphragm and intercostals), and the site of gas flow and exchange (the lungs). Respiration occurs by pumping fresh, oxygenated air in through the trachea and delivering it to the alveoli, which are air sacs that form a blood/gas interface with capillaries to perform the gas exchange. By maintaining concentration gradients, oxygen diffuses through the alveoli to the blood while the carbon dioxide diffuses from the blood to the alveoli.

The trachea splits (bifurcates) in two directions to form airways in the left and right lungs. These airways continue to bifurcate for 15 more generations, becoming smaller each time. The branches of generation 16 are known as the terminal bronchioles. Up until this point the gas is transported by convection and no gas exchange occurs. Consequently, the first 16 generations are termed "anatomic dead space," which sounds like something out of a science fiction movie. The next 7 generations are known as the "respiratory zone," which consists of airways lined with alveoli that end in alveolar sacs (acini). The average person has 300 million alveoli, each about 100 micrometers in diameter. Collectively they contribute a surface area of up to 100 square meters. This huge surface area is ideal for fast gas diffusion, since diffusion is proportional to surface area.
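The bifurcating geometry means the number of parallel airways grows geometrically with generation. A quick sketch, under the simplifying assumption of perfectly symmetric branching (real lungs only approximate this):

```python
# Airway counts under the symmetric-bifurcation assumption: every
# airway splits into two at each generation (trachea = generation 0).
for generation in (0, 1, 16, 23):
    print(f"generation {generation:2d}: {2 ** generation:>8d} airways")
```

By generation 16 (the terminal bronchioles) there are already 2**16 = 65,536 parallel paths, which is why the total cross-section, and hence surface area, explodes in the deeper generations.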

High frequency oscillatory ventilation (HFOV)
Sometimes, for a variety of reasons, the lungs will fail. At this point, the person has two options: die, or be temporarily transformed into a cyborg. Those who choose the latter are placed on a mechanical ventilator, which allows control over the pressure, volume, and rate of air delivered to the alveoli. The volume of air in a single breath is known as the "tidal volume." Conventional ventilators can have a harmful effect on infants because normal tidal volumes may overstretch their lungs. HFOV is a highly effective alternative that uses high frequencies (10 - 15 breaths per second) and smaller tidal volumes, which reduces the risk of lung damage.

It is important to note that nobody (including The Fonz) really understands why HFOV works so well. Specifically, why do small tidal volumes at high frequencies ventilate so well? How strongly does the frequency depend on the geometry of the lung? We would like to know the answers, so we are in the process of constructing a mathematical model. I am currently continuing the work of the last two generations of immersion participants.

The simplest approach is to consider three parameters of airflow in the lungs: resistance, inertance, and compliance. Resistance arises from the geometry of the airway and the viscosity of air, and is essentially the proportionality constant between pressure and flow rate. Inertance is related to the force needed to accelerate air. In both a static model and low-frequency spontaneous breathing, this term is negligible. However, when you have high-frequency oscillations in pressure (as in HFOV), the air constantly has to be accelerated and decelerated, so inertance plays a much larger role. Finally, compliance is related to the "stretchiness" of the airway. It is defined as the change in volume per change in pressure: high compliance means stretchy, while low compliance means rigid. These three properties are analogous to resistance, inductance, and capacitance in electronic circuits. As a result, much insight regarding frequency response has been gained from studying RLC circuits.

In the past, Dr. Frayer and his students have attempted to model most of the major airways (down to generation 12) by constructing branched RLC circuits and solving for the voltage and current in each generation. So far, I have been able to solve for up to 17 generations including a capacitor model of the respiratory zone. In determining the transfer function (ratio of output pressure to input pressure), I have discovered multiple resonance peaks, the positions of which are highly dependent on the lung compliance. However, most of these peaks are out of the range of frequencies used in HFOV.
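A single-generation version of this idea can be written down directly: airway resistance R and inertance L in series, with the downstream compliance C as the output element, giving a transfer function with an L-C resonance. This is only my sketch of the analogy described above, not the actual branched model, and the parameter values are illustrative rather than physiological:

```python
import numpy as np

# One-generation RLC sketch of an airway: R and L in series, compliance C
# as the output. Values are illustrative, not physiological measurements.
R, L, C = 0.05, 0.01, 0.1

def transfer(freq_hz):
    """|output pressure / input pressure| across the compliance."""
    w = 2 * np.pi * freq_hz
    z_c = 1 / (1j * w * C)                 # compliance (capacitive) impedance
    return abs(z_c / (R + 1j * w * L + z_c))

# With small R, the transfer function peaks near the L-C resonance,
# f = 1 / (2*pi*sqrt(L*C)) -- the analog of the resonance peaks above.
f_res = 1 / (2 * np.pi * np.sqrt(L * C))
print(f"resonant frequency: {f_res:.2f} Hz")
print(f"|H| at resonance: {transfer(f_res):.2f}, |H| at 1 Hz: {transfer(1.0):.2f}")
```

The full model chains many such sections, one per generation, and solves for the "voltage" (pressure) and "current" (flow) at each node; the resonance positions then depend on the compliances, matching the behavior described above.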

Dr. Frayer and I have recently decided to temporarily shelve this model, as it is far too simplified and tells us nothing about what occurs in the acinar units. Furthermore, we cannot assume that airflow is entirely convective throughout the lungs, as we have been doing. In both the respiratory zone and the final generations of the dead space, diffusion plays a larger role than convection in gas transport. We have begun looking into modeling software (Fluent, in particular) to model these complex regions. I will write more about this in the next couple of weeks, and hopefully I will have some totally radical and/or awesome pictures.

Tuesday, July 10, 2007

Neurological Surgery Rounds and more!

This past week, I attended neurological surgery ICU rounds and the daily residents' case conference, where the residents discuss how each case should be handled that day. During rounds, the residents assess neurological function by asking questions like the date, the patient's name, and the city, and by asking the patients to wiggle their toes and lift their arms. While the patients do not appear outwardly impaired, some are unable to perform these tasks, and a few vary quite visibly in their performance from day to day. It is interesting to get a sense of what the patients are like outside of the angiography suite, where I spend most of my time, and of the process that goes into deciding whether patients should be sent there or not.

Location of the carotid artery and comparison of an artery built up with plaque versus a normal artery.

I observed several procedures this week, including a carotid artery stenting (CAS) procedure, which is still in clinical trials to evaluate its effectiveness versus the standard of care – carotid endarterectomy (CEA) – in preventing recurrent strokes. Many of you are probably wondering the same thing I was: stenting is such a common procedure, so why is it still in clinical trials for this particular application? To answer this, I'll first describe the standard of care for treating blockages of the carotid artery – carotid endarterectomy – and then describe the carotid artery stenting procedure.

These procedures treat the problem of plaque building up in your carotid artery. This is particularly sensitive because the carotid artery supplies the head and neck with blood. Plaque buildup eventually causes less blood flow to reach the brain and allows for the possibility of part of the plaque or clots formed on the plaque breaking off and causing a stroke in your brain.

Image showing carotid endarterectomy, surgical treatment of carotid stenosis.

The surgical treatment and standard of care for carotid plaque buildup – carotid endarterectomy – involves temporarily blocking flow through the carotid artery and surgically removing the plaque from the inside lining of the artery.

Carotid artery stenting involves performing angiography to visualize the blocked artery. A balloon can then be used to expand the artery, and a stent is placed to hold the artery open. Follow-up angiography is performed after stenting to assess the success of the procedure.

In the past there had been problems with the plaque breaking loose during the stenting procedure and causing a stroke upstream in the brain. As a result, a new system was developed that includes a device to catch plaque traveling upstream. In particular, in this study and as part of the CREST (Carotid Revascularization Endarterectomy versus Stenting Trial) study, Dr. Gobin is using the ACCULINK Carotid Stent System which includes a protection system that opens up to protect upstream arteries from embolic material, while still allowing blood to flow through.
Image showing carotid artery stenting.

Monday, July 9, 2007

Laparoscopic donor nephrectomy

Laparoscopic donor nephrectomy refers to a minimally invasive procedure to remove a kidney from a donor. I have observed 2 living donor kidney donations in which laparoscopic donor nephrectomy was performed concomitantly with the actual kidney transplant. The laparoscopic part was actually very cool 1) because of its semi-robotic nature and 2) because it is projected on a screen, so I didn't have to strain to see it.

This procedure is performed through two or three 1/2-inch puncture sites. With the patient under general anesthesia, the surgeons make the 1/2-inch incisions. A laparoscope containing a video camera is introduced through one port, with its image projected onto a monitor so that the surgeons can see and control activity inside the abdomen. Carbon dioxide is introduced to inflate the abdominal cavity (insufflation) and provide working space. The other 2 ports accommodate the devices that perform the actual procedure, such as a staple gun. Once the organ (kidney) has been excised, a 2-inch incision under the bellybutton is made to provide a path for extraction of the donated kidney. It is really neat to watch. Somehow the surgeon manages to slide in a plastic bag into which the kidney is placed before it is taken out through the 2-inch incision. This whole procedure is done by watching the projected images on the monitor.

a) Excision of the renal vein

This state-of-the-art procedure, compared to open kidney donation, has several advantages. First and most importantly, the smaller incision shortens the postoperative hospital stay (usually two days), minimizes post-op discomfort, reduces the risk of hernia and scar formation, is more cosmetic, and speeds up the patient's complete return to normal activity (2 weeks compared to 6 for open surgery). Second, this procedure is much quicker than the old 12-inch muscle-splitting incision; it is a 2-hour operation. Lastly, the procedure doesn't require the detachment of the diaphragm, and therefore minimizes the risk of infectious complications such as pneumonia.

I thought this procedure was a textbook example of how technology has revolutionized medicine. Watching the surgeon maneuver around several organs (the intestines and the spleen) to reach the kidney was not only astounding but also gave me a great appreciation of anatomy and its mastery. All that said, it nonetheless left me with one lasting distaste. When the scope is inserted into the belly, the stomach and the other organs become vividly visible, including the yellow, disgusting fatty adipose tissue around the stomach lining. It is GROSS!


Again, With Feeling

"I hate this."

These are the words of Dr. Weinsaft as he describes the particularly "tedious," "eye numbing" job of evaluating basic cardiac function via cineMRI and phase contrast MR techniques. After observing him perform a few of these measurements, I'm not sure if I disagree. Perhaps I should explain...

A cineMRI is a variation of magnetic resonance imaging (MRI) that sacrifices spatial resolution for temporal resolution. Traditionally, MRI focuses heavily on spatial resolution, producing highly detailed images through scan times of several seconds or even minutes. In cineMRI, images are rapidly acquired at a relatively low resolution and are assembled into an image sequence resembling a smooth flip-book movie. A cinema, perhaps? The resulting movie clearly shows heart motion and, when properly positioned, can provide insight into the operation and function of rapidly moving organs, such as the heart.

A typical cineMRI sequence

The above is a "short axis" view of the heart, a perspective that clearly shows the right and left ventricles. Given the movie-like qualities of the image sequence, it is then possible to evaluate the volume of the ventricles through an entire cardiac cycle and from this, we can obtain mountains of useful measures. End Diastolic Volume (EDV), End Systolic Volume (ESV), Stroke Volume (SV), Ejection Fraction (EF), Cardiac Output (CO), the wealth of information is endless. But

Manual labor.

"Sigh," says Dr. Weinsaft.

This is done by meticulously tracing each of the frames manually, separating structures of interest by dropping hundreds of points via a steady hand and a trusty mouse. Such an exercise takes an experienced cardiologist over ten minutes. For me? Coupled with a finicky software package that heartlessly punishes mistakes by totally erasing existing work? Better section off an afternoon.

A single "segmented" frame

Upon repeating the process 30 or more times for each short axis slice, we obtain something like this:

A fully segmented image sequence

From here, the software package takes over, automatically calculating the relevant statistics from the segments. After copying down this information, Dr. Weinsaft begins his clinical reading.

The problem? Segmenting the heart manually is a time-consuming, monotonous affair that can often take more time than the reading itself. If only there was an automated method...

Fortunately, there is. An algorithm developed by Noel Codella of Cornell University is capable of segmenting the inner region of the left ventricle, with future development aimed at fully segmenting all regions of interest in the heart. In addition to being far more consistent in its segmentation decisions (an admittedly subjective judgment call at times when done manually), Noel's algorithm is fast. The time required to evaluate a single case is 1/20th that of a manual evaluation by an experienced observer.

Unfortunately, the algorithm is still being evaluated, and its results cannot yet be fully relied on. My job, for the remainder of the summer, is to segment as many cases as possible using both the automatic algorithm and manual segmentation. By quantifying the agreement between the two methods and by measuring useful statistics such as the amount of time saved using automation, we hope to provide compelling evidence of the accuracy and speed of computer-assisted evaluations.
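One common way to quantify agreement between a manual and an automatic segmentation is the Dice overlap coefficient over binary pixel masks. To be clear, the metric choice and the tiny masks below are my own illustration, not necessarily the statistics used in the actual study:

```python
# Dice coefficient between two binary segmentation masks
# (1 = pixel labeled as ventricle). Masks here are made up.
def dice(mask_a, mask_b):
    """2*|A intersect B| / (|A| + |B|); 1.0 means perfect overlap."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    return 2 * inter / (sum(mask_a) + sum(mask_b))

manual    = [0, 1, 1, 1, 0, 1, 1, 0]
automatic = [0, 1, 1, 0, 0, 1, 1, 1]

print(f"Dice overlap: {dice(manual, automatic):.2f}")  # 0.80
```

Computed per frame across many cases, a score like this turns "the two methods mostly agree" into a number that can be reported alongside the time savings.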

This convergence between computerized automation and "traditional" human evaluation is one of the aspects of engineering that interests me the most. A cardiologist shouldn't have to spend more time connecting the dots than he does rendering a diagnosis. An experienced clinician with decades of schooling and experience shouldn't be limited by how quickly they can perform an activity we learned in pre-school. And with a little more work from engineers, hopefully they won't have to.

My Research

Since my doctor was out on vacation this week, I was mainly able to work on my project: a five-decade review of esophageal atresia with and without tracheoesophageal fistula.

Esophageal atresia (EA)/tracheoesophageal fistula (TEF) is a congenital disease involving a failure in the development of the esophagus. Normally the esophagus (which connects the mouth to the stomach) and the trachea (which connects the mouth to the lungs) are separate pipes; in EA/TEF these pipes are underdeveloped and/or connected to each other. The failure to separate is pinpointed to the fourth fetal week, when the trachea and esophagus should begin to divide.

About 85% of all EA/TEF cases have an upper esophagus that ends in a blind pouch; this blind-ending segment is the esophageal atresia. The tracheoesophageal fistula in these cases is the connection of the distal esophagus (the part coming from the stomach) to the distal trachea. There are many other types of EA/TEF, some with only EA and no fistula. Some cases involve EA with both proximal and distal TEFs, meaning both the upper and lower esophageal segments connect to the trachea. Other cases may involve an EA with a proximal TEF but no distal TEF.

The first case was reported in 1670, but a successful surgical repair did not occur until 1941. After that operation, the mortality rate for EA/TEF decreased dramatically. Between 1941 and 1995, the main risk factors for survival were weight and other complications. After 1995, weight is no longer an issue, as ICU, surgical, and anesthetic techniques have advanced greatly. Cardiac and other complications remain an issue, as they sometimes prevent emergency surgical intervention; cardiac complications arise in 30% of all cases.

My project involves statistical analysis of all the cases of EA/TEF at the New York Presbyterian hospital over five decades. The cases are divided into three eras: 1960 to 1974, 1975 to 1995, and 1995 to 2007. We are interested in seeing whether there is a statistical difference in mortality, morbidity, and other complication rates between eras. We also plan to make follow-up calls to patients from different eras to see what postoperative care is still needed many years later.

From what I've calculated, there is no statistically significant difference in mortality, morbidity, or complication rates between eras, despite improvements in patient care, neonatal intensive care, and surgical technique. The data set could possibly be biased, and I'm looking into this factor in the upcoming weeks.
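
To illustrate the kind of era comparison involved, here is a sketch of a chi-square test on a mortality-by-era contingency table. The counts below are entirely made up for illustration; they are not the hospital's actual data, and the real analysis may well use a different test.

```python
# Sketch: chi-square test of independence on a (hypothetical) table of
# mortality counts across three eras. Counts are invented, not real data.

def chi_square(table):
    """Chi-square statistic for a rows x cols contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_totals[i] * col_totals[j] / total  # expected count
            stat += (obs - exp) ** 2 / exp
    return stat

# Rows: eras; columns: [died, survived] (hypothetical counts)
table = [[8, 22],   # 1960-1974
         [6, 34],   # 1975-1995
         [4, 36]]   # 1995-2007
stat = chi_square(table)
# df = (3 - 1) * (2 - 1) = 2; critical value at alpha = 0.05 is 5.99
print(f"chi2 = {stat:.2f}, significant: {stat > 5.99}")
```

Note how counts like these can show a downward trend in deaths and still fail to reach significance with small samples, which is exactly why possible bias and sample size in the data set need a closer look.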

Sunday, July 8, 2007

Modern Medicine

Vascular Surgery
Week 3

This week I’ll outline two procedures I saw that bookend modern medicine: the guillotine amputation and the arteriovenous fistula.

The Guillotine Amputation

This is a guillotine amputation. Like its name implies, this type of amputation is performed by cutting linearly through, and perpendicular to, the long axis of a limb. It’s used in emergency situations for quick removal of malignant infection. It’s also called a “flapless” amputation because no tissue is left to cover the stump. The wound is dressed but left open to avoid new infection until a proper amputation can be performed when the patient is more stable. This picture shows a transtibial amputation with the distal tibia and fibula in the stump end.

I got a first-hand view of this amputation as I held the patient’s stump while the surgeon wrapped the wound. Needless to say, this condition is as painful as it looks. This procedure is the same one that was used on the battlefields of the American Civil War nearly 150 years ago.

The Arteriovenous Fistula

As I mentioned previously, the AV fistula is the joining of an artery to a vein. This has the effect of increasing blood flow to the vein and can be felt as a vibration or “thrill” when the anastomosis under the skin is palpated (think of a cat purring). In response to the increased blood flow, the vein stretches and remodels to become stronger. This is called maturing, and after 3-6 months the vein will increase in size and look like a cord under the skin. This is an important procedure for dialysis patients—once mature, the fistula is strong enough to facilitate the insertion of the large needles needed for dialysis. This was a fantastic procedure to see—the surgeon opens the arm and literally sews an artery to a vein.

The guillotine amputation is crude and primitive, while the AV fistula is complex and depends on significant technological advancement, but both techniques currently qualify as modern medicine.

or maybe scary statisticians?!

So last time I wrote about three large-scale lung cancer screening trials that concluded unanimously that x-ray screening for lung cancer does not decrease disease-specific mortality. Now the question becomes: what are the right parameters for drawing conclusions from a diagnostic clinical trial, and what factors should be taken into account when evaluating the results of these studies?
An article published in 1999 by Dr. Henschke concluded that CT screening for lung cancer is more effective than chest x-ray, since in a small observational study she observed that 85% of cancers are missed using CXR. This motivated the NCI to put together what is considered to be the most expensive RCT, the National Lung Screening Trial (NLST), in which nearly 50,000 people were screened in two arms (CT vs. x-ray) for three years and then followed up for 7-8 years. The final measure of screening success is the mortality rate, and the final results are expected in 2008-2009.
One of the issues associated with setting up an RCT is cost. You need a large population in order to avoid statistical biases and population heterogeneity. So far it is estimated that the NLST has cost $250,000,000, and it seems this figure could double (or may already have doubled). Because the study is so costly, screening takes place over only 2-3 years; the lengthier the screening, the more costly the study. An important question here is whether a 2-3 year screening period will necessarily decrease mortality. Answering this involves complicated statistical modeling, but it has been estimated that if the NLST or the previous RCTs had screened for 18 years, the decrease in mortality between the control and screened arms would have been significant. Of course, no one in their right mind will screen 50,000 people for 18 years, because the country may very well go bankrupt. So instead these trials cut the number of years of screening, and it is no wonder they do not detect a significant decrease in mortality. After you initiate screening, you have to take into account how long it takes for effects to become observable, especially since mortality would not differ in the first few years: when you start screening, there are already many patients with late-stage lung cancer who will die regardless.
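
To get a feel for why such trials need tens of thousands of participants, here is a textbook back-of-the-envelope sample-size calculation for comparing two proportions. The mortality rates are hypothetical, chosen only to show the order of magnitude; they are not the NLST's actual design parameters.

```python
# Sketch: approximate per-arm sample size needed to detect a difference
# between two mortality proportions (standard two-proportion formula,
# two-sided alpha = 0.05, 80% power). Rates below are hypothetical.

def sample_size(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Approximate participants per arm for comparing proportions p1 vs. p2."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2

# Hypothetical: 2.0% lung-cancer mortality in the x-ray arm vs. a 20%
# relative reduction (1.6%) in the CT arm
n = sample_size(0.020, 0.016)
print(f"~{n:.0f} participants per arm")
```

Detecting a modest relative reduction in an event that is rare to begin with pushes the required enrollment into the tens of thousands, which is roughly where the NLST's two-arm total of ~50,000 sits.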
More on the other issues later....