Friday, July 29, 2016

Lab Internship: Week 4

Tuesday 7/26
I just discovered that, according to the building's safety advisor, I am not supposed to be culturing cells.  Victor will now have to prepare all of the cells needed for experimentation.  Victor is planning an experiment testing how TNF (tumor necrosis factor) affects cell behavior.  TNF is a cytokine that regulates many cell functions, such as proliferation, apoptosis, and coagulation.  In this experiment, Victor explained, the TNF will excite the cells and may cause them to jump around and possibly out of the traps.  I prepared the cells in four devices with Hoechst stain so we can identify the cells, and I introduced TNF into two of the four devices, though the trapping efficiency was not 100% in all devices.  Victor said we will have to image specific regions of the channel to see how the TNF affected the cells.  Since we just need to see the cells, the trapping efficiency is irrelevant.

Wednesday 7/27
Victor asked me to accompany him to the stock room to get some supplies.  I assumed the stock room was in the basement of the building, but it was really in KBT.  The stock room was reminiscent of a corner store, like a drug dealer's grocery store.  Walter White would probably love the stock room.  There are two on Yale's primary campus: one at KBT and one in the med school.  There is also one at West Campus, but that's far away (at like West Haven or something).

There was also a lab meeting where one of the labmates, Linda Fong, presented her progress on her project on how p38, a mitogen-activated protein kinase, affects HIV latency and apoptosis of HIV infected immune cells, which could lead to more effective forms of therapy.

I also began to learn ImageJ to analyze the images of the devices I took with Victor.  I need to learn how to use ImageJ and develop a macro to count the number of cells in the image and differentiate between the cells.  CellProfiler is another program I could use, but I do not know how.  I asked Andrés for help, but he was busy with his project.

Victor and I were about to image the devices with cells for the TNF experiment, but we found out that the RPMI evaporated in the incubator and the cells died as a result.  Many traps in some of the devices also disappeared.  Victor guessed that this occurred because there was not enough humidity in the incubator.  Tomorrow, we will begin repeating the experiment by preparing the devices.

I found a way to analyze an image and count the cells.  I just need to use the Color Threshold function to "select" the cells and analyze the particles with the Analyze Particles function.  I then get a table full of data with the measurements of each particle.  I already tested this method with images from the efficiency experiment.  The method was accurate.  I will use this method with my experiment to count the cells and further test the efficacy of this method.
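The threshold-then-count idea can be sketched outside ImageJ, too.  Below is a minimal pure-Python analogue (not the actual ImageJ macro, just an illustration of the same idea): pixels above a threshold become foreground, and each connected foreground region counts as one particle.  The function name and test image are made up.

```python
from collections import deque

def count_particles(image, threshold, min_size=1):
    """Count connected bright regions ("particles") in a grayscale image.

    A rough analogue of ImageJ's Threshold + Analyze Particles:
    pixels above `threshold` are foreground; each 4-connected
    foreground region of at least `min_size` pixels is one particle.
    """
    rows, cols = len(image), len(image[0])
    mask = [[image[r][c] > threshold for c in range(cols)] for r in range(rows)]
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                # Flood-fill this region to measure its area.
                size, queue = 0, deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if size >= min_size:
                    count += 1
    return count

# Two bright blobs on a dark background
img = [
    [0, 9, 9, 0, 0],
    [0, 9, 0, 0, 8],
    [0, 0, 0, 8, 8],
]
print(count_particles(img, threshold=5))  # 2
```

In ImageJ itself, Color Threshold plays the role of building the mask, and Analyze Particles does the region counting (plus the per-particle measurements in the results table).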

Thursday 7/28
I prepared four flow-patterning devices for Laura and her experiment.  I cut them out into individual devices, then I cleaned them.  I also punched holes in the inlets and outlets (20 inlets, 20 outlets for each device).

I prepared four more devices for my experiment (take 2).  I used J65C 6.6 for my experiment rather than 4.4, since 6.6 was less dense and less clumpy.  I prepared two of the four devices as controls, and the other two as the experimental group.  I will stimulate the cells in the experimental group with TNF to see how TNF affects cell behavior.  According to Victor, the TNF will excite the cells in a way that makes them jump out of the traps.  I will leave the cells in the devices, and leave the devices in an incubator for 24 hours.  I will also put a humidifier (fancy term for water in a bowl) in with the devices to ensure that the RPMI does not evaporate and the cells do not die.

Friday 7/29
Most of the cells in the devices disappeared.  We do not know why.  Everything was perfectly fine when we incubated the devices.  Maybe the humidifier loosened the bonding of the devices?  We checked the outlet and everything, but nothing was different--besides the fact that most of the cells were gone.  It's not like they planned a prison break or anything.  Cells can't do that--I think.  There were a few cells I could image in specific regions of the devices, so I imaged them.  The control devices were plain blue because of the Hoechst stain, but the TNF activated HIV in almost 100% of the cells, which made for some pretty pictures.  It almost calmed the stress and shame of having yet another near-failed experiment.  Victor told me this happens all the time, which just made me feel bad for scientists everywhere.  Now I have to understand my data and learn what it means.  It's likely that the TNF acted as an activator for transcription, allowing HIV to be transcribed and become active in the cell, but it could also be that TNF activated a specific MAPK (mitogen-activated protein kinase) pathway.  MAPK pathways are involved in many cellular activities, such as gene expression, proliferation, mitosis, and apoptosis.

The Transmission Electron Microscope

When compared to a light microscope, the transmission electron microscope (TEM) may seem similar in concept, but a few key components are changed. The light source is replaced with an electron gun, and the glass lenses are replaced with electromagnetic lenses. The entire chamber through which the electrons pass is put under a vacuum to avoid electron collisions with air molecules, and magnification is varied by changing the current in the electromagnetic lenses (as opposed to moving the glass lenses, as in a light microscope).

The electron gun is usually composed of a heated tungsten filament, which emits a cloud of electrons. This makes up the electron source, from which the electrons are pulled, accelerated, and then focused onto the sample. The sample has to be extremely thin (most are approx. 0.5 micrometers!) for the electrons to pass through and form an image. When the electron beam hits the sample, multiple things can happen: the electrons can be absorbed, scattered, or even reflected, depending on the thickness of the sample and its composition. The image is formed on a fluorescent screen using the information gathered from these different interactions. This image can then be photographed for later use or documentation.

(see article link in "All You Wanted to Know About Electron Microscopy...But Didn't Dare to Ask!" post)

-Patricia Acorda

Lab Update: Week 4

7/25 Monday - Worked on poster, asking lab partner for feedback on conclusion and title sections. Completed these sections and started writing up a new Methods section. Added references to photos. Went home early because of a stomach ache.

7/26 Tuesday - Finished writing up Methods section and sent poster to Apple for feedback. Using feedback, reformatted Results section again, trying to remove bullet points and labeling sections. Made minor edits to the Methods section and sent back for feedback. Attended a presentation on the other lab members' projects and participated in the ImageJ quiz made by the teachers, winning first place.

7/27 Wednesday - Awaiting feedback from Apple. Went to Hillhouse to fill out the CRISP program survey.

-Patricia Acorda

Internship Week 4

I have finished converting the NVE code into an NVT code. The major changes were the order of the subfunctions and the contents of the “movea” and “moveb” subfunctions. I sent it to my mentor for revision and copied the revised file into both of the Yale clusters that I have access to: Omega and Grace. The revised file rearranged some of my original subfunctions, added some outputs, and adjusted the velocity to 0. I was having trouble compiling the code on both clusters; every time I issued the command I received an error message informing me that the compilation had been aborted. I worked with my mentor and we found that there were complications in using the “scp” command to copy the code into the clusters. We copied and pasted it manually, along with the source files, but we only did this on Omega as opposed to both Omega and Grace.
I read up on scripts and refreshed myself on the vi editor for about an hour. My mentor created some scripts, and I copied them into Omega under a directory I created named “Scripts”. He explained the scripts to me line by line, and I ran certain parts of them according to my immediate goal. For example, I put a pound sign (comment marker) in front of every line of the script except the loop that created 16 files under my “scratch” folder. After the loop, the script issued commands to open and close the 16 files, so I inserted a source file named “input.txt” into the space between the commands so that it would be copied into the 16 files. Once this was done, I put pound signs back in front of these lines because their job had already been executed. I then removed the pound signs from the loop that created a file named “ignition2.tasks”. Next, I put the pound signs back and removed one from the line that specified parameters such as the requested amount of time, the memory, the queue, etc. Then I submitted the job. There were two other jobs already running.
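For illustration, here is a rough Python sketch of what the file-creation part of that script did. The real script was a cluster shell script, not Python, and all names here (run directories, template contents) are hypothetical.

```python
import os
import shutil
import tempfile

def make_task_files(workdir, template, n=16):
    """Sketch of the script's loop: create n run directories under a
    scratch-style folder and copy a template input file into each one."""
    created = []
    for i in range(1, n + 1):
        run_dir = os.path.join(workdir, f"run{i:02d}")  # hypothetical naming
        os.makedirs(run_dir, exist_ok=True)
        target = os.path.join(run_dir, "input.txt")
        shutil.copyfile(template, target)  # copy the source file into each run
        created.append(target)
    return created

# Demo in a throwaway temp directory
workdir = tempfile.mkdtemp()
template = os.path.join(workdir, "template.txt")
with open(template, "w") as f:
    f.write("nsteps = 1000\n")  # made-up input contents

files = make_task_files(workdir, template)
print(len(files))  # 16
```

The commented-out-lines workflow in the entry corresponds to enabling only one such stage of the script at a time before resubmitting the job.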
When I returned the next day, the job was complete. I experimented with some commands on the cluster and attempted to edit some of the code so that it would produce outputs in a file.

Sunday, July 24, 2016

Internship Week 3

I read about constraint methods in constant-temperature molecular dynamics for advanced computer simulation techniques. One common method to fix the kinetic temperature is to rescale the velocities at each timestep by the square root of the desired thermodynamic temperature divided by the current kinetic temperature. This rescaling acts as a constraint on the set of equations of motion. It is common for configurational properties to be determined in a simulation while the precise kinetic properties are added in later.
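As a concrete sketch of that rescaling (my own toy example in reduced units, with mass and Boltzmann constant taken as 1), each velocity is multiplied by sqrt(T_target / T_current) so the kinetic temperature lands exactly on the target:

```python
import random

def kinetic_temperature(velocities, mass=1.0, k_B=1.0):
    # T = (2/3) * KE / (N * k_B) for N particles in 3D (reduced units)
    ke = 0.5 * mass * sum(vx*vx + vy*vy + vz*vz for vx, vy, vz in velocities)
    return 2.0 * ke / (3.0 * len(velocities) * k_B)

def rescale(velocities, T_target, mass=1.0, k_B=1.0):
    """Crude velocity-rescaling thermostat: multiply every velocity by
    sqrt(T_target / T_current) so the kinetic temperature matches T_target."""
    T_now = kinetic_temperature(velocities, mass, k_B)
    lam = (T_target / T_now) ** 0.5
    return [(vx*lam, vy*lam, vz*lam) for vx, vy, vz in velocities]

random.seed(0)
v = [(random.gauss(0, 1), random.gauss(0, 1), random.gauss(0, 1)) for _ in range(100)]
v = rescale(v, T_target=2.5)
print(round(kinetic_temperature(v), 6))  # 2.5
```

This brute-force rescaling is the simplest constant-temperature scheme; the constraint methods in the reading achieve the same fixed kinetic temperature more formally through modified equations of motion.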
I have been working on converting the NVE code (where energy is constant) to an NVT code (where temperature is constant). I have many references, such as a C++ coding book, pseudocode, and an NVT code written in Fortran. I expect to have it done by tomorrow so that I am able to run the program successfully on a Yale cluster.

Friday, July 22, 2016

Lab Update: Week 3

7/18 Monday - Went over Apple's feedback independently. Made minor edits to poster, mostly worked on reading up on lanreotide for the weekly article.

7/19 Tuesday - Went over poster feedback with Apple. Made major changes to the poster. Went back and revised my image processing system, using heavier filters and using a variety to be able to show the different results on my poster. Using a 9.0 pixel radius Mean filter instead of a 3.0 pixel radius Mean filter and a 3.0 pixel radius Gaussian Blur instead of a 2.0 pixel radius Gaussian blur before using a Default threshold gave much smoother results.

7/20 Wednesday - Worked more on formatting the Results section of the poster, using the images from the process created yesterday. Got feedback from a friend, who suggested defining more of the ImageJ vocabulary.

7/21 Thursday - Attempted to use the same imaging process on other images, but ran across a problem with analyzing the binary image created. Added more to the Abstract and reformatted the poster. Experimented more with different filters, trying to find a way to get rid of more grayish artifacts that were interfering with some particle outlines. Wasn't able to find a way to separate the two. Still stuck on how to analyze some of the particles.

7/22 Friday - Experimented more with analyzing particles. Played around with estimated particle size to try to get proper outlines of the lanreotide nanotubes. Worked on the conclusion, methods, and reference sections of the poster. Implemented feedback from my friend and added a bit more explanation of some vocabulary.

-Patricia Acorda

Thursday, July 21, 2016

Lab Internship: Week 3

Tuesday 7/19
Today, I did some cell culturing and split my cell groups in a ratio of 1:5 (1 mL of 1 million cells : 5 mL of entire solution (cells and RPMI medium)).  This was the 29th passage, or the 29th time these cells have been split.  I took out 7.5 mL of cells from J65C 6.6 and left 2 mL in the container to grow.  I placed the 7.5 mL of cells into a 15 mL falcon tube to spin in the centrifuge to get a pellet of cells at the bottom of the tube to run through three devices.  Once I got a pellet, I removed most of the old medium and added 500 microliters of new RPMI into the tube.  I then moved the entire solution out of the falcon tube and through two cell strainers to ensure the cells were separate and not in clumps.  I ran them through the three devices I prepared earlier.  After a few minutes, I removed the leftover media and replaced it with Hoechst, a fluorescent staining agent that stains the DNA in the nucleus of cells.  I then placed the devices in a styrofoam box to block off all of the light, since Hoechst is light-sensitive.  After 20+ minutes, I removed the Hoechst and replaced it with media so the cells could survive.  I then checked the devices under the microscope to ensure that cells were trapped and no more cells were flowing, which would otherwise interfere with the imaging process.  Afterwards, Victor brought me to the EVOS microscope, a widefield fluorescence microscope used for cell imaging.  Victor showed me how to use the device, and I took pictures of the cells trapped in the channels of the devices.  I used multiple filters to tell us different things about the cells.  For example, the GFP filter shows if there is HIV present in cells by activating the GFP reporter attached to the end of HIV strains and making the cells appear green.  The TxRed filter shows if there are proteins that stimulate the production of HIV and make the cells orange.  The DAPI filter makes the cells' nuclei appear blue since it reveals the Hoechst stain.
I took the images and used ImageJ to observe the pictures on my computer.

Wednesday 7/20
Uneventful day.  I did not do anything new, just made some more devices for Laura so she can do her experiment.  I did make two masters (not the devices themselves, just casts) for future use.  I also punched out holes in four flow-patterning devices for Laura using a drill press in a lab on the first floor.  It took approximately 30 minutes to complete.  For the procedure on how I made the devices, refer to Lab Internship: Week 2.

I also read articles on cell signaling, which is what Laura is focusing her project on.

Laura informed me that she and Victor will not be in tomorrow, but I will still come in so I can pass some cells (cell culturing).  After that, I plan on reading for a while before heading home.

Thursday 7/21
I split J65C 4.4 and 6.6 1:5 successfully on my own.  I did not die, nor did the cells or station catch on fire, so I think it was an overall success, unless I missed something that I did not realize.  I warmed up my media, I made sure everything was clean and sterile, I did not spill the cells or kill them on accident, I did not break anything, I pipetted the solutions multiple times to get rid of clumps of cells, I only made a single bubble (which is pretty good), and I put in the proper amount of media (8 mL to go with the 2 mL of cells to make 10 mL of solution).  The solutions should be at 200,000 cells per mL now.  If I messed something up, Victor will notice when he splits the cells on Saturday.
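The arithmetic behind a 1:5 split is simple enough to put in a few lines (a toy sketch of the calculation, not lab software): keep one part of the cell suspension, top up with media to the original volume, and the concentration drops by the split ratio.

```python
def split_volumes(total_mL, ratio):
    """Volumes for a 1:ratio passage: keep 1 part of cells,
    add fresh media to bring the total back up to total_mL."""
    keep = total_mL / ratio        # cells carried over
    media = total_mL - keep        # fresh RPMI to add
    return keep, media

def diluted_concentration(conc_per_mL, ratio):
    """Cell concentration right after a 1:ratio split."""
    return conc_per_mL / ratio

keep, media = split_volumes(10, 5)
print(keep, media)                          # 2.0 8.0
print(diluted_concentration(1_000_000, 5))  # 200000.0
```

These match today's numbers: 2 mL of cells plus 8 mL of media, taking a 1 million cells/mL culture down to 200,000 cells/mL.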

Laura is not here; she is actually on vacation.  I do not have much else to do besides some reading and extra work that I have.

Lab Update: Week 2

7/12 Tuesday - The focus of our research project was switched from iron sucrose molecules to lanreotide nanotubes. We were given the necessary information and supplementary articles by the lab members working on the project and given a chance to watch the lanreotide nanotubes being imaged on the transmission electron microscope (TEM), where we were also allowed to try using the TEM ourselves. We were assigned to work with the images from July 7, 2016 and given time to freely experiment with different ImageJ functions. I created my first imaging process theory, recording my process in Word, though my imaging results weren't that great.

7/13 Wednesday - I did more independent research on the importance of lanreotide for the purpose of my poster, which I managed to organize and complete a little bit of. Continuing from Week 1, we learned more about using ImageJ and how to properly process the images given to us. We were given more advice on what different filters did to images and the reasoning behind using filters and auto thresholding techniques as well as the importance behind the image histograms.

7/14 Thursday - I did more poster organization, beginning to actually create it in PowerPoint and plan out what to write. I did more experimentation with ImageJ, really starting to understand the program after further instruction and advice from a program expert. I developed my second imaging process theory, this time with better results.

7/15 Friday - I finished the Results and Discussion section of my poster using the second imaging process theory and sent the poster to Apple for feedback.

-Patricia Acorda 

Wednesday, July 20, 2016

Bulk Metallic Glass

Bulk metallic glasses (BMGs) are a class of materials that offer many desirable and unique processing routes and properties. An improving universal description of the structure and behavior of BMGs should lead to better tools for their design and commercialization. Metals and alloys normally exist as crystalline materials. Compared to crystalline metals, bulk metallic glasses often exhibit very high strength, corrosion resistance, and elasticity. The BMGs with the best glass-forming abilities are Zr- and Pd-based; glass-forming ability is often reported in terms of a “critical casting diameter”.
Small changes in composition can lead to large changes in properties. Several rules exist correlating a single physical parameter with glass-forming ability. However, only some of these correlations are useful for predicting glass-forming ability. These rules still require knowledge of properties of the alloy to allow prediction of glass-forming ability. A better understanding of the atomic structure of BMGs is developing through analyses of the simplest binary alloys. The BMG structure is a randomly packed assembly of the different atoms. However, although BMGs lack long range atomic order, they do exhibit short and medium range order over several atomic lengths.
Compared to other engineering materials, BMGs are used in small volumes, but most are very strong in compression. Many engineered parts, however, require good toughness and strength in tension rather than compression. BMG composites have been well designed by matching the microstructural length scale of the crystalline phase to crack length scales, which resulted in high tensile ductility and fracture toughness. A unique property of BMGs is their ability to reversibly transform from the low-temperature glassy state into a supercooled liquid state above a glass transition temperature. Many BMGs are not well suited to conventional forming at room temperature, but forming in the supercooled liquid region (SLR) has been successful in most systems. Extrusion, closed-die forming of micro-components, and more innovative techniques such as blow molding have also been demonstrated.
BMGs can be formed into very precise shapes while in the SLR.
Production technologies for crystalline metals have matured, but those for BMGs are still in their early stages. A variety of casting techniques and fluxing strategies are now used to produce BMGs. BMG properties are said to depend on cooling rate, so the choice of manufacturing route can be important. Gas atomization or mechanical alloying can be used to produce amorphous powders.

Biomimetic Organization: Octapeptide self-assembly into nanotubes of viral capsid-like dimension

Recently, scientists have been interested in the ability of certain simple molecules to spontaneously assemble into nanostructures because of their potential applications in biotechnology and materials science. Certain biological self-assemblies, including tobacco mosaic virus capsid proteins, tubulin, and actin, are able to form long filaments under 100 nm in diameter, but their high fabrication costs eliminate their potential for use in practical applications. A simpler alternative based on simpler molecules has arisen, though it requires a deeper understanding of the relationship between the molecular structure and the self-assembly process of the nanostructures. Additionally, before now, no simple synthetic molecule was able to self-assemble into nanotubes in the diameter range of 20-30 nm.

Lanreotide, an octapeptide synthesized as a growth hormone inhibitor, is an excellent example of a molecule able to self-assemble into a well-defined nanostructure. By studying this self-assembly process, researchers were able to reach a better understanding of the systems involved in the formation process. It was determined that the diameter of the nanotube could be changed by modifications to the molecular structure, and that the process probes the minimal interactions needed to form large self-assembling nanotubes (as observed with the biological self-assemblies mentioned above). In conclusion, the exemplary system of self-assembling lanreotide nanotubes in water can still be investigated for further applications.


-Patricia Acorda

Friday, July 15, 2016

Lab Internship: Week 2

Monday 7/11
Professor Miller-Jensen and her lab members made a presentation on microfluidic devices for high school students in the SCHOLAR program.  They asked me to observe and make notes on the presentation so they could improve it for next year.  I actually assumed the presentation started at 9:30 am because of a sign I saw outside the presentation room, but it was really at 11:15 am.  During that time gap, I helped Laura, a research assistant, and Victor Bass, a graduate student, prepare pH solutions for the high schoolers to test their devices on.  We prepared four solutions: one with a pH of 4, two with a pH of 7, and one with a pH of 10.  I used a pipette to put the solutions in containers labeled A, B, C, and D, one container for each solution.  The presentation was good; the students seemed somewhat interested in microfluidics.  Some even asked if they could intern at Professor Miller-Jensen's lab since I was interning here.  The professor also had the students do a hands-on activity where they made their own microfluidic devices using filter paper and crayons.  Students would draw a path with a crayon and melt the wax to create a channel so the fluids cannot escape.

I asked Victor about the goal of my internship, and he said that he was mainly teaching me technical skills so that I know how to operate and conduct myself in a lab in preparation for the future.  He also wants me to make a passive-flow microfluidic device and test its efficacy for another experiment which will examine how heterogeneity affects cell-to-cell communication and the production of cytokines when a cell from the immune system (T Cell) is infected by HIV.

Tuesday 7/12
I made more devices by producing a PDMS solution and making molds of the passive-flow device. Victor let me do all the work by myself; he just watched to make sure I would not make a mistake.  I polymerized a chemical by mixing it with a curing agent in a 10:1 ratio (25 g of chemical, 2.5 g of curing agent).  After, I poured the PDMS into a container in the hood and used a centrifuge to get rid of the bubbles in the substance.  I had to make sure the centrifuge was balanced by placing opposite my PDMS a container filled with an equal weight of water so it would not tip over.  After using the centrifuge, I cleaned out the passive-flow device to make sure there was not any extra residue of PDMS present, which would otherwise mess up the device.  I poured the PDMS into the container holding the passive-flow device and baked the cast in an 80˚C oven for 2+ hours.  While the mold baked in the oven, Victor and I split my cells (J65C 6.6 and 4.4 -- just labels for the cells) in a 1:4 ratio (1 mL of cells : 4 mL of whole solution) to make sure they would not overpopulate.  In each container, there was 8 mL of solution with 1 million cells per mL.  I split the cells by pipetting them back and forth in the container to get rid of any clumps in the cell populations.  I then left only 2 mL of cells in each container and took out 6 mL.  I dumped the 6 mL from 4.4 into a bleach container to kill the cells, but I put the 6 mL of cells from 6.6 into a container so we could run them through the devices we already made.  I replaced the 6 mL I removed from the containers with 6 mL of RPMI, which is the medium we use to grow our cells.  We then put the cell containers in an incubator so they could grow.  We put the sample of cells from 6.6 in a centrifuge so that we could get a pellet of cells at the bottom of the container.
This was so we could remove the medium from the sample and get 5 million cells in 1 mL of media (5 million cells is the optimal amount to be tested in a microfluidic device).  Victor then used a pipette to place the cells into a cell strainer to ensure there were no clumps of cells, which would otherwise keep the traps from working well.  Victor and I took two of the devices we made last week and cut out holes for the outlet and inlet so cells could flow through the channel.  We then treated the devices and microscope slides with oxygen plasma in the Plasma Etch so we could attach the devices to the slides and make the devices hydrophilic so fluid could flow through them.  After securing the devices, Victor ran distilled water through the devices to ensure flow could occur and hydrophilicity remained in the channel.  Using a pipette, Victor and I removed the water from our devices and placed 50 microL of our cell samples into the devices.  We then observed the flow of the cells in the device under a microscope.

Wednesday 7/13
I talked with Professor Miller-Jensen about my view on the presentation and any notes or suggestions I had for her.  My main suggestions were to go more into detail and explain some concepts further since I believed many of the students were confused at some of the content.  I also recommended making visuals to help students picture how a microfluidic device works/flows so students can understand laminar flow better.  The professor told me about how she wished there could be a workshop so she could explain the concepts that go with microfluidics more in detail and have more hands-on activities for students.  It would definitely be a good idea; however, I am unsure if students will actually be willing to go to a workshop for multiple days to learn about microfluidic devices.

I made more PDMS molds of microfluidic devices with Laura.  Instead of making molds of the passive-flow device, we made some for the well chip and the flow-patterning device.  I am still unsure of how they function, so I need to learn about that.  I followed the same procedure as I did with Victor on Tuesday, July 12th.  Laura and I punched holes in the devices for the inlets and outlets, and we cleaned the devices using tape to get rid of large sediment, as well as cleaning solutions and methods such as ethanol and sonication.  Sonication is the use of sound waves to vibrate substances, and in this case, it is used to clean the devices and get rid of extra PDMS.

There was a lab meeting discussing a paper on the microbiome in the human gut and how a species of bacteria (B. dorei) with LPS (lipopolysaccharide) can inhibit immune education (the immune system adapting to illnesses early on -- the hygiene hypothesis).  Laura also discussed her progress on her project studying cell signaling.  I did not understand a thing from the presentations.  I still need to learn more about science and the project, since I only have high school biology as my background.  So far, all I have been taught in lab is technical skills.

Thursday 7/14
Victor and Laura did not have much planned for me today.  The professor actually has Victor working on projects for the rest of this week and almost all of next week.  However, I still managed to accomplish a few tasks.  I cultured and split my cells (J65C 4.4 and 6.6).  4.4 seemed more dense, meaning that there seemed to be more cells in the container, but Victor assured me that they were just larger.  Victor did not guide me, he was merely there for supervision and if I needed to ask any questions.  I successfully split the cells with little to no trouble.

Laura then asked me to make some devices with PDMS, so I did that, too--on my own.  Laura did not give me any instruction.  I successfully made two PDMS solutions to be used as molds for the well chip and the flow-patterning device.

Laura also sent me some reference articles and papers with background information on her project where she is studying cell signaling.  I am not completely sure on the specifics of her project, so once I finish reading the articles, I will be sure to ask her.

Internship Week 2

One of the readings I did this week provided foundational information about the computer simulations of liquids. My focus was on the sections of the literature that dealt with finite difference methods and the Verlet neighbour list.
The complete set of positions, velocities, and accelerations at the next timestep can be predicted from their current values by a Taylor expansion. However, this prediction alone would not produce entirely accurate figures because it does not account for the true equations of motion. This is where a corrector step comes in, which “corrects” for these inaccuracies. Usually only one corrector step is executed because corrector steps are generally very costly. A successful simulation algorithm should be quick, small in terms of memory, allow the use of a long timestep, duplicate the classical trajectory as closely as possible, satisfy the known conservation laws for energy and momentum, and be simple in form.
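The Gear predictor-corrector from the reading is more involved, but the flavor of a finite-difference integrator can be shown with the simpler velocity Verlet scheme (my own sketch, not code from the reading), which is prized precisely because it conserves energy well over long runs:

```python
def velocity_verlet(x, v, force, mass, dt, steps):
    """Integrate one particle with the velocity Verlet scheme:
    x(t+dt) = x + v*dt + 0.5*(f/m)*dt^2
    v(t+dt) = v + 0.5*(f(t) + f(t+dt))/m * dt
    """
    f = force(x)
    for _ in range(steps):
        x = x + v*dt + 0.5*(f/mass)*dt*dt
        f_new = force(x)
        v = v + 0.5*(f + f_new)/mass*dt
        f = f_new
    return x, v

# Harmonic oscillator: F = -k*x; total energy should be nearly conserved.
k, m = 1.0, 1.0
force = lambda x: -k*x
x0, v0 = 1.0, 0.0
x, v = velocity_verlet(x0, v0, force, m, dt=0.01, steps=10000)
E0 = 0.5*m*v0**2 + 0.5*k*x0**2
E = 0.5*m*v**2 + 0.5*k*x**2
print(abs(E - E0) < 1e-4)  # True
```

The energy drift stays tiny even after 10,000 steps, which is the "satisfy the known conservation laws" criterion from the list above in action.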
The Verlet neighbour list is a process of recording, for a central particle, the particles that exert forces on it. These particles are within a space referred to as the potential cutoff sphere. There is also a larger shell around this cutoff, called the “skin”, containing particles that are close enough to the central particle that they could move into the inner sphere and exert forces on it at some point in time. For this reason, their distances from the particle are measured as well. The list is updated periodically, usually every 10-20 timesteps.
At my internship this week, I have done a lot of reading to understand the computer simulations that are being done and the principles behind them, in addition to learning to code. I have read a sample code and am currently attempting to replicate it without looking at it. I am writing down the actions and purposes of each line of the code, then looking at this “translation”, in a sense, and attempting to reproduce the code solely from my knowledge of coding.

Monday, July 11, 2016

How Nanotechnology Works

           Nanotechnology has much to offer. To understand how it all works, you must first understand how small a nanometer really is. A nanometer is one billionth of a meter, which is smaller than the wavelength of visible light, or about one hundred-thousandth the width of a human hair. Nanotechnology deals with anything measuring between 1 and 100 nm. At the nanoscale, particles do not behave as you would expect; they move erratically. Although it is troublesome to pick and place atoms, once you succeed, you can build and produce almost anything with exact precision. On the nanoscale, everything you think you know may or may not apply.
           Nanoparticles are so small that you cannot see them with a light microscope. Nano scientists must use a scanning tunneling microscope or an atomic force microscope. Scanning tunneling microscopes use a weak electric current to probe the scanned material. Atomic force microscopes scan surfaces with a precise, fine tip. Fortunately, both microscopes send their data back to computers, which can analyze the information and display it on a monitor.
           Nanowires and carbon nanotubes have sparked particular interest among scientists. A nanowire is a wire with a diameter of about one nanometer. Electronic devices, like computer chips or cell phones, would improve with the help of nanowires. A carbon nanotube is a cylinder of carbon atoms, and its properties depend on the direction in which the carbon sheet was rolled. With the right alignment of atoms, carbon nanotube material is hundreds of times stronger than steel and six times lighter. Engineers are trying to make building material out of it, especially for planes and cars, which would increase efficiency and strength.
          There are many products that benefit from nanotechnology. In older sunscreens, the particles were much larger, which is why the color was white. Newer sunscreens have nanoparticles of zinc oxide or titanium oxide, which leave no color behind. Scientists have also created clothing that protects from UV radiation and stains. They coat the outside of the fabric with zinc oxide nanoparticles, the same as in sunscreen, for better UV protection. Other clothes are coated with nano-scale hairs that repel water and other materials.
           Nanotechnology is going to drastically change the world. Replicators like those in the world of "Star Trek" could become a very real thing through what is called molecular manufacturing. The goal is to have nano-machines place millions of atoms together so that a desired product can be produced. Professor Richard Smalley explains that for molecular manufacturing to actually take place, there must be trillions of assemblers working together; with one assembler, it could take millions of years. Eric Drexler believes the assemblers could first replicate themselves, then reproduce exponentially to manufacture products. Manufacturing costs would decrease, leaving consumer goods cheaper and stronger. Nanotechnology might have its biggest impact on the medical industry: by working at the nanoscale, one can attack disease and reconstruct the body. There is speculation that nanorobots could enter the body, reversing the effects of aging and increasing the average life expectancy.
            Before any nanotechnology products are created, we must learn more about their materials and properties at the nanoscale. Elements behave differently at the nanoscale, so there is some concern about whether nanoparticles could be toxic. Doctors are skeptical because they are not sure whether the particles could easily pass through the blood-brain barrier, a membrane that protects the brain from harmful chemicals in the bloodstream. Nanotechnology can be used not only for medical purposes but also, if harnessed properly, to create more powerful weapons. The most important thing we can do is carefully examine the challenges, risks, and ethical dilemmas.

All You Wanted to Know About Electron Microscopy...But Didn't Dare to Ask!

What Is Electron Microscopy?
One of the earliest and most powerful microscopes was a light microscope created by Antony van Leeuwenhoek. This microscope was made up of a powerful convex lens and an adjustable specimen holder. It is suspected that this microscope could magnify objects up to 400x, leading van Leeuwenhoek to discover protozoa, spermatozoa, and bacteria.

The first limitation of van Leeuwenhoek's microscope was the quality of the convex lens, but that could easily be fixed by adding another lens to magnify the image produced by the first lens. This configuration gives us the compound microscope, the basis of modern light microscopes. Today's light microscopes can give us a magnification of up to around 1000x, but they are limited by the wavelength of the light used for illumination. Small improvements to magnification can be achieved by using a smaller wavelength of light (blue or ultraviolet) or by dipping the specimen and the front of the objective lens in oil (which has a high refractive index).

Then, in the 1920s, accelerated electrons in a vacuum were found to behave just like light, except they had a wavelength about 100,000 times smaller than that of light. This meant that they could be used for higher magnifications than light. Instead of being manipulated by glass lenses and mirrors, electrons could be manipulated using electric and magnetic fields, giving rise to the first transmission electron microscope (TEM), built by Dr. Ernst Ruska at the University of Berlin.
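To put that factor in perspective, here is a quick back-of-the-envelope calculation (my own illustration, using the non-relativistic de Broglie formula; the 100 kV accelerating voltage is an assumed typical value for a TEM):

```python
import math

# Physical constants (SI units)
h = 6.626e-34    # Planck's constant, J*s
m_e = 9.109e-31  # electron rest mass, kg
q_e = 1.602e-19  # elementary charge, C

V = 100e3  # assumed accelerating voltage: 100 kV

# Non-relativistic de Broglie wavelength of an electron accelerated through V:
# lambda = h / sqrt(2 * m * e * V)
lam_electron = h / math.sqrt(2 * m_e * q_e * V)

lam_green_light = 550e-9  # wavelength of green light, m
ratio = lam_green_light / lam_electron

print(f"electron wavelength: {lam_electron:.2e} m")
print(f"visible light is roughly {ratio:,.0f}x longer")
```

The result is on the order of picometers, consistent with the "about 100,000 times smaller" figure above (relativistic corrections shrink the wavelength a little further).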

The TEM has a resolving power of 0.1 nm, meaning that using the microscope, we are able to tell the difference between two points down to 0.1 nm apart (points less than 0.1 nm apart will be seen as a single point). This is remarkably powerful compared to the resolving power of the naked eye in proper light, which is 0.2 mm.

Saturday, July 9, 2016

Asymmetry in Crystallization Behavior

The article discusses the asymmetry between the critical heating and critical cooling rates of Vit 1, a glass-forming liquid. In this context, the word "glass" does not refer to the transparent material used in windows or cups; here, a glass is an amorphous solid that is not at equilibrium, as opposed to a crystal, a more ordered solid that is at equilibrium.
It was observed that the heating rate required to suppress crystallization was significantly higher than the cooling rate required. The cooling rate was found to be 1 K/s while the heating rate was 200 K/s. This may be attributed to the fact that nuclei formed during cooling and heating are exposed to different growth rates, likely a general feature for metallic systems.
The significance of bypassing crystallization is that it allows the creation of a pure glass or supercooled liquid; crystallization would destroy the disordered atomic arrangement of the glass. Crystallization is defined as the point at which the crystalline volume fraction within the melt reaches some small but finite value. It is identified by a rise in temperature, referred to as recalescence, caused by the release of the heat of fusion at the solid/liquid interface during crystallization; this release momentarily increases the sample's heating rate.
In steady-state nucleation, the number of nuclei formed during the heating of a purely amorphous sample up to the liquidus temperature is the same as the number formed during cooling from the liquidus temperature down to the glass transition temperature. It was concluded that if any metallic liquid is quenched into an amorphous solid, it has to be heated at a much faster rate than it was cooled to avoid crystallization.

Friday, July 8, 2016

Lab Internship: Week 1

During my first week at Kathryn Miller-Jensen's lab, I actually accomplished more than was expected.  I learned how the lab conducts experiments, how they grow their test cells, and how they make the microfluidic devices used in the experiments.  Victor Wong, one of the research students, taught me how to culture and grow Jurkat cells (an immortalized line of T cells) so the lab can use those immune cells to test how they respond and communicate when introduced to HIV.  Victor explained that when introduced to HIV, cells respond and secrete differently due to biological noise, meaning that even genetically identical cells act differently because of variation in gene expression and proteins.  Laura and Victor both showed me how to make the microfluidic devices by creating a PDMS solution and making a mold of the device using a master slide that holds the design.  The PDMS solution was created by mixing a base chemical with a curing agent and spinning the bubbles out of the mixture with a centrifuge so the bubbles would not affect the device.  If bubbles were still present, and they almost always were, we used a gas pump to pump out the bubbles and an air hose to pop the remaining ones.  Victor and Laura baked the solution in an oven at 80˚C for 2 hours and cut out the devices with scalpels and blades.  Afterwards, they stored the devices in containers for later use.



The ELISA, or enzyme-linked immunosorbent assay, is a method used to measure antibodies, antigens, proteins, and glycoproteins.  The ELISA has been used for pregnancy tests, diagnoses of infections such as HIV, and the measurement of cytokines, or proteins secreted by immune cells in response to pathogens.  The ELISA plate is coated with a capture antibody raised against the antigen being used in the experiment.  Next, the researcher introduces a sample of what they're testing, like a serum or blood sample, and the antigens in that sample attach to the capture antibodies.  Then, the researcher adds detection antibodies, which are labelled with enzymes.  After, the researcher adds a substrate that reacts with the enzymes and turns into a colored product.  From that color, the researchers can determine the antigen concentration in each sample.
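The last step, going from color intensity to concentration, is usually done with a standard curve built from samples of known concentration. A minimal sketch (my own illustration; the standard absorbance/concentration values are made up):

```python
def concentration_from_absorbance(absorbance, standards):
    """Linearly interpolate an unknown's concentration from a standard curve.

    `standards` is a list of (concentration, absorbance) pairs measured from
    samples of known concentration.
    """
    pts = sorted(standards, key=lambda p: p[1])  # sort by absorbance
    for (c1, a1), (c2, a2) in zip(pts, pts[1:]):
        if a1 <= absorbance <= a2:
            # Linear interpolation between the two bracketing standards
            return c1 + (c2 - c1) * (absorbance - a1) / (a2 - a1)
    raise ValueError("absorbance outside the standard curve")

# Hypothetical standards: (pg/mL of cytokine, measured optical density)
standards = [(0, 0.05), (50, 0.30), (100, 0.55), (200, 1.00)]
conc = concentration_from_absorbance(0.425, standards)
```

Real plate readers fit a four-parameter logistic curve rather than straight lines, but the idea of mapping measured color back through known standards is the same.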


Shape Identification and Particles Size Distribution From Basic Shape Parameter Using ImageJ

           The researchers at Mississippi State University developed an ImageJ plug-in that extracts the dimensions from any digital image of an array of particles after identifying their shapes and determining their particle size distribution. The paper describes the plug-in's development and its application to food grains and ground biomass. The plug-in was applied successfully to analyze the dimensions and size distributions of food grain and ground Miscanthus particle images.
          This research deals with three main objectives: developing the ImageJ plug-ins, determining the effects of shape, size, and orientation, and demonstrating an application to samples. Plug-ins are useful because producing this output manually is very time consuming and introduces user subjectivity. A computer vision method determined the distribution and amount of garlic, parsley, and vegetable ingredients in pasteurized cheese with an accuracy of over 88%, compared with the sensory method. Multiple methods are in use because quick and accurate particle size distribution analysis is highly desirable, especially when dealing with granular or particulate materials. With ImageJ, one can map each actual particle to an equivalent ellipse and match its perimeter. The study establishes that fitted ellipse dimensions produce relatively good estimates.
           The results and discussion covered the multiple plug-in features that were tested. First, the authors determined that shape-based correction factors applied to fitted-ellipse dimensions are essential for measuring linear dimensions. Fitted ellipses perform better than bounding rectangles because they produce smaller deviations. It was then noted that for triangular shapes, the FMR distinguishes them from other shapes for easy identification. These findings on shape dimensions led to the development of a shape identification strategy. Later, an image of known geometric shapes and measurements was drawn and tested; however, some misclassifications occurred with samples, and it was found that further research would be needed to avoid them.
           The effect of particle shape on the measured length and width was calculated using geometric reference particles and the mean absolute deviation. It was concluded that shape does not have much effect on the mean absolute deviation, and that the deviation decreased as particle area increased. The results indicated good shape classification by the plug-in only with round particles. A coincidence of the arithmetic and geometric mean lengths was also observed, and the distribution curve was shifted toward the left because the lengths were so small. Finally, a drawback of the image processing method was noted: it can only be used on particles that have been mechanically separated.
           The research conducted shows the great potential of ImageJ plug-ins, proving them to be efficient and reliable. They are ready to provide tailor-made solutions for the specific needs of machine vision applications.

Thursday, July 7, 2016

Shape Identification and Particles Size Distribution From Basic Shape Parameters Using ImageJ

Abstract, Introduction, & Results and Discussion

In various fields, quick and accurate particle size distribution analysis is desired. Currently, many distribution analyses are performed manually, which leaves room for inconsistencies and can be fairly time consuming. An ImageJ plug-in has created a way to use computer-vision-based image processing as an alternative or replacement method for measurement, identification, and size distribution analysis.

Though ImageJ comes with a built-in option for analyzing particles, this option presents multiple problems, including not providing dimensions of practical interest (i.e. the length and width of the particles), large deviations from the actual dimensions when using the available dimensions from the bounding rectangle of the particle, and deviations influenced by the particle shapes. Thus, it was concluded that fitted ellipse dimensions should be used instead of the rectangle dimensions. But these dimensions still showed some deviation, such that they still needed correction for accuracy. The proposed research work deals with developing an ImageJ plug-in that analyzes particle size distribution by determining the particle shapes, applying shape-based correction factors, and then accurately reporting the dimensions of the particles through corrected fitted-ellipse dimensions.

The Results and Discussion section discusses several aspects of the ImageJ plug-in that were tested. One aspect is how using the major and minor axes of the fitted ellipse for dimension measurements gave more accurate results than using the bounding rectangle dimensions. Through several analyses with differently shaped particles at different orientations, it was clearly determined that the fitted ellipse was a better option than the bounding rectangle for dimension measurements. However, shape-based correction factors were still needed for the measurement of linear dimensions.
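A quick numerical illustration of why the bounding rectangle deviates with orientation (my own example; the 10 x 4 particle is made up): the axis-aligned bounding box of a rectangle of length L and width W rotated by angle t has sides L|cos t| + W|sin t| and L|sin t| + W|cos t|, which match the true dimensions only at t = 0.

```python
import math

L, W = 10.0, 4.0  # true particle length and width (made-up values)

for deg in (0, 15, 30, 45):
    t = math.radians(deg)
    # Axis-aligned bounding box of the rotated rectangle
    bb_len = L * abs(math.cos(t)) + W * abs(math.sin(t))
    bb_wid = L * abs(math.sin(t)) + W * abs(math.cos(t))
    print(f"{deg:2d} deg: bounding box {bb_len:5.2f} x {bb_wid:5.2f}")
```

At 45 degrees the box reads roughly 9.9 x 9.9 for a 10 x 4 particle, while a fitted ellipse, computed from the particle's second moments, is rotation-invariant, which is why the paper prefers it (with correction factors).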

Secondly, shape parameters of particles of geometric shapes were also observed in order to develop the shape identification strategy employed by the plug-in. These shape parameters included reciprocal aspect ratio (RAR), rectangularity (RTY), and feret major axis ratio (FMR). Each gives specific values for different shapes depending on their parameters, which were implemented into the plug-in to classify the shapes of particles.
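A sketch of how such parameters might be computed (my own illustration; the paper's exact definitions may differ — here RAR is taken as minor over major axis, RTY as particle area over bounding-rectangle area, and FMR as Feret diameter over major axis length):

```python
import math

def shape_parameters(area, major, minor, bb_len, bb_wid, feret):
    """Compute three simple shape descriptors from measured dimensions."""
    rar = minor / major              # reciprocal aspect ratio: 1.0 for a square
    rty = area / (bb_len * bb_wid)   # rectangularity: 1.0 for an aligned rectangle
    fmr = feret / major              # Feret major axis ratio
    return rar, rty, fmr

# Example: an axis-aligned 10 x 10 square, where the Feret (longest caliper)
# diameter is the diagonal, 10 * sqrt(2)
rar, rty, fmr = shape_parameters(area=100.0, major=10.0, minor=10.0,
                                 bb_len=10.0, bb_wid=10.0,
                                 feret=10.0 * math.sqrt(2))
```

Each shape family yields characteristic value ranges (e.g. a circle has rectangularity of about pi/4, roughly 0.785), and a classifier can assign a particle's shape by checking which ranges its parameters fall into.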

The shape identification and dimension measurement accuracy of the plug-in was then tested using an image containing a group of known geometric shapes and dimensions. This test showed high accuracy for the area calculations but significantly lower accuracy for the perimeter calculations. However, it was discussed that the lower perimeter accuracy was not of high importance, since perimeters can be determined in other ways and are not the primary focus of the plug-in. It was also observed that as shapes got smaller and smaller, beyond a certain point they could only be represented by a plain square in the image. This would lead to some misclassifications with samples, as shapes were represented by fewer and fewer pixels. The conclusion was that further research would have to be done in order to avoid misclassifications associated with (1) overlapping shape parameter values, (2) smaller shapes being represented by few pixels, (3) irregular shapes that deviate from the test cases, and (4) overlapping particles.

Next, the effects of particle shape, size, and orientation on the determined length and width were tested. The test for the effect of particle shape showed small mean deviations across all shapes. The test for the effect of particle area (size) showed that the deviations tended to decrease with an increase in particle area. Though the lengths of triangles and the widths of ellipses showed increased deviation, this was not significant enough to drastically affect the determined length and width. Finally, the test for the effect of particle orientation showed only minor variations, with a gradual increasing trend with orientation angle.

Lastly, the plug-in was used to create a size distribution analysis with food grain and with ground biomass. Both tests proved the plug-in to be effective, and gave hope for its growing potential. ImageJ plug-ins have the power to provide tailor-made solutions to suit the specific needs of machine vision applications.

-Patricia Acorda

Wednesday, July 6, 2016


           ImageJ, created by Wayne Rasband of the Research Services Branch in Bethesda, Maryland, has many unique characteristics. The image processing program is readily available to all users, license-free, and runs on any operating system. Throughout its six years of development and redesign, ImageJ has been downloaded tens of thousands of times, proving itself a reliable image processing program.
            The ImageJ program is virtually limitless because of the availability of extensions: macros and plug-ins. For example, a macro can be written that acquires an image every ten seconds and stores it in a sequence. Plug-ins are external programs that offer image processing capabilities that do not exist in the core of ImageJ. They have revolutionized ImageJ, bringing new ideas to the program.
          The imaging capabilities of ImageJ are known to be unlike any other. Not only can it perform basic operations and manipulations, it also includes more sophisticated operations like dilation and erosion. One of its strong capabilities is that ImageJ can run on multiple platforms; it is cross-platform. Due to the large developer community, new plug-ins and operations are always being developed; if a file format is not supported, the user community can usually develop support within a few days. Although there is no support hotline, the large user base communicates through a mailing list. The program is easy to access, many users are available online for help, and a thorough manual gives step-by-step instructions.
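Dilation and erosion are morphological operations on binary images: dilation grows the white (foreground) regions, while erosion shrinks them. A minimal pure-Python sketch of dilation with a 3x3 structuring element (my own illustration, not ImageJ's implementation):

```python
def dilate(img):
    """Binary dilation with a 3x3 structuring element.

    A pixel becomes 1 if any pixel in its 3x3 neighbourhood is 1.
    `img` is a list of equal-length rows of 0s and 1s.
    """
    rows, cols = len(img), len(img[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            out[r][c] = int(any(
                img[r + dr][c + dc]
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if 0 <= r + dr < rows and 0 <= c + dc < cols))
    return out

# A single white pixel grows into a 3x3 block
img = [[0, 0, 0],
       [0, 1, 0],
       [0, 0, 0]]
dilated = dilate(img)
```

Erosion is the dual operation: a pixel stays 1 only if its entire neighbourhood is 1. Applying erosion then dilation ("opening") is a standard way to remove speckle noise from a thresholded image.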
          Just like everything else in the world, ImageJ does have its faults. Sadly, the more sophisticated algorithms of commercial image processing programs make them drastically faster. The program also requires some knowledge for installation and first steps, and there is no on-site installation or training, unlike with commercial image processors. Since ImageJ is constantly being redeveloped and redesigned, bugs and "undocumented features" can find their way in, and these unwanted bugs can taint an ongoing project.
          ImageJ has attracted countless scholars and students because of its accessibility and efficiency. The program proves that imaging is on the boundary between a field of science and a field of engineering.

Friday, July 1, 2016

Generalized Selection, Innate Immunity, and Viruses Conclusion

After conducting their research, the scientists discovered that culturing VSV in MDCK cells led to populations of generalists, meaning that the viral progeny can adapt to the host and evade the immune systems of similar, related hosts.  Generalists can adapt to different habitats and function accordingly in order to survive.  The specialist population, or the viral progeny from the HeLa-adapted populations of VSV, was believed to be less fit in unpredictable environments than the generalist populations, since it did not experience selection and did not adapt to the innate immunity of the host.

One of the factors of host breadth, or the ability to adapt to hosts, is cell-surface proteins.  Proteins on the cell membrane dictate which viruses can attach to the cell and which cannot.  The viruses that can attach are the ones that can perform viral replication.  VSV, the virus used in the experiment, is not restricted by the proteins on the cell membrane, meaning it has no problem entering the host.  This means that the surface proteins will not affect the fitness of the virus; only the immune system can affect it.

The results of the experiment showed the scientists that the evolution and selection of a virus are key factors in determining the fitness of a viral population.  Understanding how the environment affects a virus can show how a virus can maintain beneficial host-antagonistic alleles, or alleles that code for a virus's virulence, including the ability to evade the innate immunity, or immune system, and infect the host.  A competent IFN environment--an immunocompetent host with active interferons that activate the immune system and protect the host cells--influences VSV to maintain the beneficial alleles to survive and successfully replicate in the host.  However, a relaxed IFN environment, or an immunodeficient host, leads to the loss of the host-antagonistic alleles.

The viral progeny from the alternate-host populations spent half of their evolutionary history in the MDCK host.  However, the viral progeny did not efficiently adapt and maintain the alleles needed to evade innate immunity because the VSV was not under the selective pressure long enough.  The VSV was cultured in differing environments, leading scientists to believe that the adaptation to multiple environments influenced the maintenance and expression of beneficial alleles, and ultimately generalization.  Despite this, the viral progeny did not have high fitness and did not survive as well as hoped in the immunocompetent novel hosts (PC-3 cells).  The novel hosts were used as test sites to determine the fitness of the VSV.  The researchers believed that the maintenance of the host-antagonistic alleles led to the loss of alleles that would express for the growth of the VSV in 3 of the 4 alternate-host populations.

The scientists also concluded that only the innate immunity affects viral evolution, not the properties of the host.  They tested this with the VERO cells, or primate cells.  Scientists grew the generalist viral populations in MDCK cells, or canine kidney cells, and discovered that they also had high fitness in the VERO cells since the MDCK-evolved populations were able to evade the innate immunity of the VERO cells due to robust viral selection.

With their research, the scientists ultimately concluded that the innate immunity of a host impacts the evolution of a virus.


Integrated Circuits-Memory Grows Up

The groundwork for most mobile phones includes non-volatile memories (NVMs). These NVMs have contributed to the decreasing cost and power consumption of integrated circuits, allowing the cost of 1 gigabyte (GB) to approach $1. NVMs incorporate flash memory, which adds a gate to the existing MOSFET transistor geometry in order to control the containment and release of an electric charge, whose presence or absence is the bit of information stored by the NVM. Each NVM consists of a storage element and a selection device that addresses it in a memory array.
In December, many researchers gathered to discuss the issue of maximizing scalability. Companies like Samsung, Intel, and Toshiba proposed a variety of 3D structures that all stack several memory layers to increase the memory density per unit area of silicon in order to decrease the cost per bit. The challenge with 3D memory technologies using thin-film transistors is guaranteeing effective performance and reliability.
There has been a shift to a two-terminal device rather than the standard three-terminal transistor to maximize storage. Researchers from Intel introduced the use of chalcogenide to address phase-change memory storage elements by exploiting the ovonic threshold switching effect. The chalcogenide alloys can replace transistors in accessing storage elements inside a memory array. In addition to this, other alternative non-volatile storage elements that do not use a floating gate have been developed. These include resistive random access memory based on metal oxides and charge trap memories such as SONOS. 
Al Fazio of Intel has delivered a warning that increasingly complex algorithms will need to be developed to make sure flash memory behaves as anticipated in its most popular form, NAND. He also recognized that 3D cross-point memories could be a vehicle to deliver the high speed and low cost that can positively shape memory capacity in storage applications.

Image Processing with ImageJ

Recently, the discipline of imaging has come to play an important role in multiple research areas, leading many professionals in these areas to use image processing software. One increasingly popular, though quite old, image processing program is ImageJ. ImageJ is well known for being public domain software that can run on any operating system. In addition, ImageJ appeals to users with its image manipulation capabilities as well as its reliable and immense user community, which creates a forever expanding web of ideas and innovation.

Being public domain software means that ImageJ is free, and anyone can download and use it license-free. This automatically makes it a popular choice among image processing software, the current rate (at the time this article was written) being about 24,000 downloads each month. The fact that ImageJ can run on any operating system is also quite a feat, making it available to anyone who wishes to use it.

Unlike other image processing software, ImageJ has a vast and continually expanding set of imaging capabilities. These include support for all common image manipulations, basic and more sophisticated operations, mathematical operations, visualization operations, and surface and volume rendering. In addition to the core capabilities of the software, extensions made by other users are also available online. These extensions include macros, which make it easier to automate repeated tasks, and plug-ins, which are external programs that offer additional image processing capabilities. These programs have made ImageJ extremely helpful to many researchers, especially in the science and engineering fields.

One aspect of ImageJ that really keeps it relevant to modern needs and well updated is the fact that it has an immense user community that offers extensions to the software as well as guidance. Because ImageJ is public domain software, there is no hotline for problems, leading users across the world to share different solutions and ask each other questions. This has helped ImageJ, despite being six-year-old software, to continually evolve as time moves on and keep its functions updated with modern technology. As such, ImageJ is still used today in many research facilities, serving both the engineering and science fields well.

-Patricia Acorda