WISRD Research & Engineering Journal

Volume 4, Issue 1


Spring 2022

Wildwood Institute for STEM Research and Development

Editors: Noe Schwartz, Scott Johnson, Megan Noel

Welcome to the fourth issue of the official journal of the Wildwood Institute for STEM Research and Development (WISRD). In the following pages you will find papers describing the results of original research projects produced by our institute and its collaborators. The editors and contributors of this journal would like to thank WISRD COO Joe Wise and WISRD Publisher Scott Johnson.

CONTENTS

Preparations for the Detection of Radio Waves Created by High-Energy Cosmic Rays using the WISRD Cosmic Ray Detector
L. Perttula, R. Cortez, S. Zohar, J. Reis, J. A. Wise .......... 2

Ex situ coral growth: Use of artificial intelligence for quantifying growth of small polyp stony (SPS) coral microfragments in a marine aquarium
L. Guiga, Dr. K. Griffs, J.A. Wise .......... 29

Ex situ coral growth: A technical report on development of 3D printed ceramic substrates for growth of small polyp stony (SPS) coral microfragments in a marine aquarium
T. Albano, Dr. K. Griffs, J.A. Wise .......... 72

Ex situ coral growth: A technical report on tank lighting and water chemistry parameters for growth of small polyp stony (SPS) coral microfragments in a marine aquarium
M. Papadopoulos, H. Witsken, Dr. K. Griffs, J.A. Wise .......... 91

Planning and Constructing the Big Bear Observatory
P. Kelly, J.A. Wise .......... 106

Detecting Occultations of Trans-Neptunian Objects (TNOs) with Light Curve Graphs
R. Allenstein, J.A. Wise .......... 13


Preparations for the Detection of Radio Waves Created by High-Energy Cosmic Rays using the WISRD Cosmic Ray Detector

L. Perttula, R. Cortez, S. Zohar, J. Reis, J. A. Wise

1. Introduction

Cosmic rays reach the Earth's surface at a rate of about 1 per square centimeter per minute. These cosmic rays interact with the upper atmosphere of the Earth to produce a shower of secondary particles that can be detected by a scintillation detector. WISRD is using its cosmic ray detector in conjunction with its radio telescopes to investigate reported radio waves spawned by incident cosmic ray events.

2. Background

Cosmic rays are extremely high-energy particles. Those observed from Earth are usually attributed to solar flares from our Sun, but they can also be produced by other high-energy events originating outside our solar system. The latter are called galactic cosmic radiation and come from the remnants of supernovas, powerful explosions that occur during the last stages of the life cycle of massive stars, defined as stars at least eight times more massive than the Sun. When these high-energy particles hit our atmosphere, they interact to form a "spray" of particles (Fig. 1). Among these secondary particles are short-lived positive or negative pi mesons that decay into positive or negative muons, which are detected when they pass through our cosmic ray detectors.

Figure 1. Diagram of a cosmic ray splitting apart in the atmosphere (image: radioactivity.eu.com).

2

Previous work by Clancy W. James (On the Nature of Radio-Wave Radiation from Particle Cascades, 2022) suggests these high-energy particles sometimes trigger radio waves, believed to arise from Cherenkov (1) or Bremsstrahlung (2) radiation. To study this, WISRD's cosmic ray detector is used in conjunction with its radio telescopes to look for and document radio events coincident with cosmic ray showers, which may connect the two phenomena and provide further data in the field of high-energy physics.

3. Cosmic Ray Detection

WISRD maintains and operates a QuarkNet (Quarknet.com) cosmic ray detector consisting of four scintillators with photomultiplier tubes (PMTs). The data acquisition (DAQ) board was designed at Fermilab (see appendix). The first step is to install the counting software "Equip" from Fermilab and calibrate the detector. From theory and experiment we know that the count rate should be about 1 event per square centimeter per minute. When setting up a cosmic ray detector, the first voltage to establish is the threshold voltage of the PMT. This voltage sets the level a signal must exceed to be counted as a cosmic ray event capture. Setting the threshold too high will miss all but those events that deposit a large amount of energy in the detector; setting it too low will result in background noise within the electronics being mistaken for event counts. The most efficient threshold voltage is determined by increasing the PMT threshold voltage until the change in count rate is reduced, i.e., the "kink" in the curve. This kink is at around 300 mV (see Fig. 2).
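The kink can also be located programmatically from a scan of count rate versus threshold. The short Python sketch below is illustrative only: the threshold/rate pairs are hypothetical stand-ins for an Equip scan, and the 10% cutoff is an arbitrary choice, not part of the QuarkNet procedure.

import numpy as np

# Hypothetical threshold scan: thresholds in mV and measured count
# rates (counts/min); real values would come from an Equip scan.
thresholds_mv = np.array([100, 150, 200, 250, 300, 350, 400])
rates = np.array([5200, 3100, 1450, 620, 230, 221, 214])

# The "kink" is where the rate stops falling steeply: take the first
# threshold at which the fractional drop to the next point falls below
# the cutoff (10% here).
frac_drop = -np.diff(rates) / rates[:-1]
kink = thresholds_mv[int(np.argmax(frac_drop < 0.10))]
print(f"Estimated kink near {kink} mV")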


Figure 2. Threshold counts from a Windows PC running the Equip software. The best threshold voltage is determined by increasing the threshold until the change in count rate is reduced, i.e., the "kink" in the curve; this kink is at around 300 mV. This procedure was followed for all four scintillators (also referred to as paddles).

4. Raspberry Pi

Because the cosmic ray detectors are being deployed on the roof of the facility housing WISRD, methods for control and data acquisition had to be fully explored. A Raspberry Pi is a light, inexpensive, and compact computer that uses a small amount of power and can communicate with other devices over the internet. A comparison study of the Fermilab software "Equip" running on a Raspberry Pi and on the PC version of the software was done to determine the reliability of the Pi. Comparing the established threshold levels (see Figs. 2 and 3), the Pi performed as expected, confirming that we could use it for our experiments.

Figure 3. Threshold counts from a Raspberry Pi running the Equip software. The data were compared to those taken with the PC version of the software (see Fig. 2) and determined to be acceptably comparable.

All data and power cables were tested for reliability using threshold reproducibility (see Figs. 5 and 6 for two examples).


Figure 5. An expected threshold curve using cable set 2 with paddle 1.

[Chart for Figure 6: paddle 2, channel 2, cable 2 (0.809 V)]

Figure 6. An expected threshold curve using cable set 2 with paddle 2.

To determine the optimum operating voltage for each of the photomultiplier tubes (PMTs), the voltage on channel 1 was raised until it detected 40-60 counts per second. Using this voltage, PMTs 1 and 2 were stacked, and counters were set to count when a signal was coincident on channels 1 and 2. The voltage on channel 2 was then swept from 0.550 to 0.755 V. A plot of the counts on the varying channel 2 and the coincidence count vs. voltage can be seen in Figure 7.

Figure 7. Plateauing channel two using channel one as a fixed voltage (Ch 2 - 0.722 V). Note that as the voltage is increased, there is a point where the coincidence count levels off. That point is the ideal operating voltage for the PMT. Once channel two was optimized, it was used as the fixed voltage and channel 1 was swept to find its plateau (Fig. 8).
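The plateau point can also be picked off a sweep numerically. This Python sketch makes the same judgment the eye makes on Figure 7; the voltage/count pairs are hypothetical, and the 5% cutoff is an arbitrary choice for illustration.

import numpy as np

# Hypothetical plateau sweep: control voltages (V) on the swept channel
# and the coincidence count recorded at each step.
voltages = np.array([0.55, 0.60, 0.65, 0.70, 0.72, 0.74, 0.76])
coincidences = np.array([40, 190, 520, 780, 810, 822, 828])

# The plateau begins where the coincidence count stops climbing
# steeply: take the first voltage at which the step-to-step gain
# drops below 5%.
gain = np.diff(coincidences) / coincidences[:-1]
plateau_v = voltages[int(np.argmax(gain < 0.05)) + 1]
print(f"Plateau onset near {plateau_v:.2f} V")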


[Chart for Figure 8: channel 1 vs fixed channel 2; series: count #1 and Coincidence]

Figure 8. Plateauing channel one using channel two as the fixed voltage (Ch 1 - 0.814 V). The remaining channels were plateaued using channel one as the fixed voltage, as seen in Figures 9 and 10.

[Chart for Figure 9: channel 3 vs fixed channel 1; series: count #3 and Coincidence]

Figure 9. Plateauing channel three using channel one as the fixed voltage (Ch 3 - 0.732 V).

[Chart for Figure 10: channel 4 vs fixed channel 1; series: counts #4 and Coincidence]


Figure 10. Plateauing channel four using channel one as the fixed voltage (Ch 4 - 0.803 V). After plateauing the channels, the potentials across each PMT were set. The final check was made by examining histograms of counts vs. time over threshold for the counters, as seen in Fig. 11; these curves indicate a result that we consider a successful calibration, as they are symmetrical and centered a little over 20 ns.
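The check amounts to histogramming the decoded time-over-threshold values and inspecting the peak. A minimal sketch, with randomly generated stand-in data rather than real DAQ output:

import numpy as np

# Stand-in time-over-threshold (ToT) values in ns for one channel;
# real values are decoded from the DAQ data stream.
tot_ns = np.random.normal(loc=22.0, scale=4.0, size=5000)

counts, edges = np.histogram(tot_ns, bins=40, range=(0, 50))
peak_ns = edges[np.argmax(counts)]
print(f"Histogram peak near {peak_ns:.1f} ns")  # expect a little over 20 ns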

Figure 11. Histogram visualizing time over threshold for the four channels.

5. Preparing the Physical Detectors

The physical detectors had to be light proofed, kept dry, and protected from the extra heat of being placed on a roof in Los Angeles, California.

Figure 12. WISRD members designing and building weatherproofed boxes for the scintillators.

Each scintillator with PMT was wrapped in black garbage bags, black theater spotlight paper, and black tape, then placed in a box built in the lab from 2x4s and ¼" plywood (Figs. 12 and 13). Because the photomultiplier for the scintillator is extremely sensitive to light, the only light that can be allowed inside is the very low intensity light produced when incoming muons penetrate the scintillator. This light needs to be isolated from all other light sources to ensure that the light observed and recorded is produced by muon interactions and that contamination from other light sources is excluded.


Figure 13. Light-tight scintillator.

To avoid any possible damage by condensation from moisture in the air contained in the boxes, we wrapped each detector in two adult-size Depends™ (Fig. 14).

Figure 14. Adult-sized Depends™ were used to absorb any moisture that enters the box.

Additionally, to reduce the volume of air containing any moisture, extra space in the box was filled with dry sand (Fig. 15). The sand was also useful in leveling the detectors and restricting their movement inside the box.

Figure 15. Adding sand. Several tests were conducted to make sure there was no additional background radiation from the sand affecting the detectors.

The boxes containing the PMTs were painted with Henry's Aluminum Paint™ to reflect as much solar energy as possible and reduce heating (Fig. 16).

Figure 16. Painting the boxes with Henry's Aluminum Paint™ to reflect solar radiation.

Anti-static PVC pipe was run from the detectors, through the box, to the power control box, which houses the DAQ board, a rechargeable battery, and the Raspberry Pi. 3D printed adapters (3) were designed to reduce the size of the PVC pipe coming out of the box. A rechargeable battery charged by a solar panel provides power to the scintillators, while the Raspberry Pi is powered using a power-over-ethernet (POE) switch and an ethernet cable running to the roof.

Figure 17. Two of the four detectors in light-tight/waterproof boxes with adapters attached.

6. Detecting Cosmic Ray Showers

For the first attempt at detecting cosmic ray showers, detectors 1 and 2 were stacked together and detectors 3 and 4 were stacked together, with a separation of 15 ft between the stacks. Data were collected for the month of April (Fig. 18). Each data set represents about 12 hours of data, to keep file sizes manageable and to protect against detector failure. Settings are shown in Fig. 19. The event gate width is set to 100 ns, and the detector coincidence is 1 since we have only the one detector. The channel coincidence is set to 2, meaning that a "count" requires at least two channels to fire within 100 ns. Hit coincidence, the last field, gives the minimum number of hits on any channel within the 100 ns. We arbitrarily set this to 8 for our analysis.
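In code, these criteria reduce to a sliding 100 ns window over the time-ordered hits. The Python sketch below is a simplified illustration, not the QuarkNet analysis tool; the hit list and its decoded (time, channel) form are hypothetical.

from collections import namedtuple

# Hypothetical decoded hit stream: each hit is (time in ns, channel).
Hit = namedtuple("Hit", ["time_ns", "channel"])
hits = [Hit(10, 1), Hit(35, 2), Hit(40, 1), Hit(60, 3), Hit(72, 4),
        Hit(80, 2), Hit(85, 1), Hit(95, 3), Hit(5000, 1)]

GATE_NS = 100      # event gate width
MIN_CHANNELS = 2   # channel coincidence
MIN_HITS = 8       # hit coincidence used for the shower analysis

def find_candidates(hits):
    """Slide a 100 ns gate over the time-ordered hits; keep windows that
    satisfy both coincidence requirements. Overlapping qualifying
    windows describe the same candidate."""
    ordered = sorted(hits, key=lambda h: h.time_ns)
    events = []
    for i, first in enumerate(ordered):
        window = [h for h in ordered[i:] if h.time_ns - first.time_ns <= GATE_NS]
        if len(window) >= MIN_HITS and len({h.channel for h in window}) >= MIN_CHANNELS:
            events.append(window)
    return events

print(f"{len(find_candidates(hits))} candidate window(s)")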


Figure 18. Datasets for the month of April selected for shower study.

Figure 19. Settings for shower studies. Note that here Hit Coincidence is set to 2; we normally ask for 8.

Our analysis of the data provided many instances of two events with 8 or more hit coincidences, so we tightened our criteria to require at least three events. This reduced our candidate pool to three data sets with 3 simultaneous events, but only two that also had 8 or more hit coincidences: April 17th and April 19th.


Figure 20. Notice 14 hits, 12 hits, 10 hits, etc., all in a short span of time.

These days show several cosmic rays being detected by both stacks within the same 100 nanoseconds, potentially registering as three separate events. A plot of the April 19th event with 14 hit coincidences appears to lend support for calling this a shower.

Figure 21. Analysis of a potential shower based on our current criteria.

7. Footnotes

(1) Cherenkov radiation: electromagnetic radiation emitted when a charged particle (such as an electron) passes through a medium at a speed greater than the phase velocity of light in that medium. The radiation is named after the Soviet scientist Pavel Cherenkov, the 1958 Nobel Prize winner, who was the first to detect it experimentally under the supervision of Sergey Vavilov at the Lebedev Institute in 1934.

(2) Bremsstrahlung radiation: Bremsstrahlung, from the German bremsen, "to brake," and Strahlung, "radiation" (i.e., "braking radiation" or "deceleration radiation"), is electromagnetic radiation produced by the deceleration of a charged particle when deflected by another charged particle, typically an electron by an atomic nucleus.

(3) CAD for adapters is on the WISRD server: \\192.168.168.40\assets\3dprint\3dpring\jeremyr


8. Bibliography

Blackbody Radiation. 15 Aug. 2020, https://chem.libretexts.org/@go/page/1677.

Huege, T. Radio detection of cosmic ray air showers in the digital era. Physics Reports, Volume 620, pages 1-52, 2016.

Knoff, E.N. and Peterson, R.S. Plateauing Cosmic Ray Detectors to Achieve Optimum Operation Voltage. United States: N.p., 2008. Web.

9. Acknowledgements

The authors would like to thank QuarkNet for the use of the cosmic ray detector provided to WISRD for this work. They would also like to thank Wildwood School for its support of the Wildwood Institute for STEM Research and Development.



Detecting Occultations of Trans-Neptunian Objects (TNOs) with Light Curve Graphs (1)

R. Allenstein, J.A. Wise

1. Introduction

Creating light curve graphs requires specific programming that is hard to find in most software. The RECON lab uses a QHYCCD 174M-GPS camera that records its data in ".fits" files, which are also incompatible with some software. This leaves very few programs that have the capability and are compatible with the system WISRD uses. The raw data are sent to the Southwest Research Institute (SWRI) to analyze, but WISRD is responsible for pre-screening the data to see if we've captured the occultation. We use PyMovie version 3.3.1 and Pyote version 4.2.1 to do this.
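The pre-screening amounts to aperture photometry over the frame sequence: total the light in a small region around the target star in every frame and look for a dip. A minimal Python sketch with astropy, where the folder path and star position are placeholders rather than real values:

import glob
import numpy as np
from astropy.io import fits

# Sum the pixels in a small box around the target star in each frame.
frame_paths = sorted(glob.glob("event_frames/*.fits"))  # placeholder path
y, x, half = 240, 320, 8  # hypothetical star position and box half-width

flux = []
for path in frame_paths:
    data = fits.getdata(path).astype(float)
    flux.append(data[y - half:y + half, x - half:x + half].sum())

flux = np.array(flux)
depth = flux.min() / np.median(flux)
print(f"Deepest frame is {depth:.2f}x the median flux")  # well below 1 suggests a dip

PyMovie does this far more carefully (tracking apertures, reference stars, calibration), which is why it, rather than a hand-rolled script, is used for the actual screening.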

Figure 1. Example of an occultation of the Jupiter Trojan Patroclus captured by WISRD on May 9, 2021, made using SAOimageDS9.

RECON is a collection of telescopes along a longitudinal line in the western United States from the Mexican border north into Canada, spaced about every 200-250 miles. These telescopes are outfitted to detect and record occultations of Trans-Neptunian Objects (TNOs) under a grant from the National Science Foundation (NSF). The orbits of these small TNOs (as small as 100-200 km in diameter) are often not well known, so the location for observing an occultation might be anywhere along a 2000-mile longitudinal line. For a given RECON event, only 3 or 4 of all the telescopes may observe the actual occultation. The data from all the telescopes are put into a computer program that computes possible shapes for the object; multiple events for the same object further constrain the shape. TNOs are Kuiper Belt objects whose orbits around the Sun are sometimes within the orbit of Neptune and at other times outside it (Pluto is the most famous TNO). These objects have had few if any interactions with other Solar System bodies and are thus the oldest, most pristine remnants of the earliest epoch of our Solar System. The data WISRD and the RECON network provide help constrain an object's orbit, its size, and in some instances its shape. With an accurate size, the amount of reflected light can be used to determine the albedo of the object and consequently an idea of its surface composition. This information furthers our knowledge of the origins of our solar system.

(1) Sponsored by the Wildwood Institute for STEM Research and Development (WISRD) and the Southwest Research Institute (SWRI) with funding from the National Science Foundation.

1.1 SAOimageDS9

The first software installed to make light curve graphs was SAOimageDS9. It was able to take ".fits" images and use image contrast control to brighten them. However, the software could not layer images or view multiple images at once. The ".gif" made with it was created by manually adjusting the contrast of multiple images, turning the ".fits" files into ".png" files, and finally putting them into a ".gif" creator. SAOimageDS9 was also incapable of creating light curves without heavy augmentation; most of the time spent trying to create light curves went to downloading additional resources. This effort was eventually abandoned when PyMovie was introduced.

1.2 PyMovie

The analysis tool currently being used is PyMovie, which is used by the wider RECON community. This method was presented to the Wildwood RECON team during the Eurybates Campaign in North Las Vegas, where the leaders of RECON, Dr. Marc Buie and Dr. John Keller, gave a demonstration of the PyMovie software with members of the development community who assisted in the explanation. PyMovie is able to take whole folders of ".fits" files (and other types) and load them into a viewer. It is also able to create light curves and a file for checking the validity of the occultation.

1.3 Pyote

During the introduction of PyMovie a second program, Pyote, was introduced. Pyote uses the csv/histogram files produced by PyMovie to analyze the timing and light level of the occultation and verify whether it was an occultation or an anomaly. Pyote was explained by Dr. Marc Buie and Dr. John Keller after their explanation of PyMovie.

2. How to use PyMovie

After an event, find where the ".fits" files are stored on the device/server. If the files are on a server, drag them to the desktop of a computer that has PyMovie installed. Make sure that the ".fits" files are all in one folder, as that is necessary for the next step. Open PyMovie and wait for the program to load.


Figure 2. PyMovie after being opened. The red circle highlights the "Select FITS folder" option.

After the program has loaded, click "Select FITS folder"; refer to the circled button in Figure 2 for its location. This option loads all of the ".fits" files into a slideshow on the right side of the screen. Below the ".fits" images are boxes titled "current frame" and "stop frame"; the "stop frame" box shows how many ".fits" frames are loaded. Make sure that all the ".fits" files from the folder have loaded. If any are missing, it could lead to problems with finding the occultation.

Figure 3. PyMovie screen showing an unedited ".fits" image.


2.1 Contrast Controls

When a ".fits" file is first opened in PyMovie, the screen displays a black image with a small bar of noise at the top of the window. The noise bar is useful for telling whether a ".fits" image has loaded properly, since it appears only once an image is loaded.

Figure 4. PyMovie with the editing option circled; this checkbox opens the contrast controls for the ".fits" images.

When the ".fits" image is first loaded, it shows a completely black screen with a small bar of noise. To observe the occultation and the starfield, the contrast needs to be changed. To change the image contrast, click on the small check box next to the screen. This brings up the image contrast controls on the right side of the screen. Since the PyMovie window can't be resized, the controls may appear off-screen to the right; dragging the window to the left should bring them into view.

Figure 5. PyMovie with the contrast changed so that the stars are visible using the yellow bars.


In the image contrast controls, there are yellow triangles at the top and bottom of the blue column. These control how much light is being looked for in the image. Within the blue column is a dark blue triangle, which represents all of the stars in the image. To start, choose a yellow triangle and begin dragging it toward the closest edge of the dark blue triangle: if dragging the top yellow triangle, pull it to the top of the dark blue triangle; if dragging the bottom one, pull it to the bottom. The yellow triangles should end up touching the dark blue triangle. The contrast is important because it makes the stars more visible against the background of space; the software works best when the stars are clear and a different color than the background.

Figure 6. The contrast is further edited using the black column on the side of the screen.

To further enhance the contrast between the stars and the sky, use the black column. This is based more on intuition and what looks best for that specific starfield; usually the best range of light is somewhere near the middle of the column. After this step the starfield is clearly visible and the bright stars are strongly contrasted against the sky. Finally, click the image contrast control and uncheck the box, which saves the contrast settings. If this step isn't followed, the image contrast will be undone and the display will revert to a black screen.
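What the triangles do is equivalent to a display-level stretch: pixel values below the low level map to black, values above the high level map to white. A numpy sketch of the same idea, with a placeholder file name and arbitrary percentile levels:

import numpy as np
from astropy.io import fits

data = fits.getdata("frame0001.fits").astype(float)  # placeholder file

# Clip the display range between two levels, analogous to dragging the
# yellow triangles to the edges of the pixel-value distribution.
lo, hi = np.percentile(data, [1.0, 99.5])
stretched = np.clip((data - lo) / (hi - lo), 0.0, 1.0)  # 0..1 display range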

Figure 7. The “finder” tab is circled in red.


2.2 Getting Rid of the Top Static

Switch to the "finder" tab circled in Figure 7, and next to "top" in the redact section enter the number 10 (see Figure 8). The finder tab can redact parts of the ".fits" images and improve the contrast, since it removes white pixels that add overall brightness to the contrast control. The top 10 rows of pixels in our ".fits" images are pure white, which can throw off the contrast control. Figure 8 highlights the redact section and the generate button. In the "num frames" field, which sits between "redact top" and Generate "Finder", enter however many images are loaded into PyMovie. After both steps, click the generate "finder" button; it will run through all the specified images and cut out the top 10 rows. After it finishes, the images should look better. Optionally, also go to the "Misc." tab, in the same area as the "finder" tab. Its first option, "apply 'line noise' median filter", removes the noise lines from the images, making it easier to distinguish stars.

Figure 8. The redact section is circled in red. Generate "Finder" is circled in green.


Figure 9. Snap-to-blob aperture is the tool used to highlight a star.

2.3 Selecting Target Star and Reference Stars

Once the stars are as contrasted as possible against the background, the software can begin analyzing the data set. First, locate the target star within the image. If the camera's tracking worked and it wasn't windy, the telescope should have stayed locked on the star. If it didn't, the star selection process won't follow the star as it moves, and the software won't capture the full event: as the telescope moves, the stars slowly shift out of the selected areas while the software runs through the frames. To begin selecting stars, right click over the target star and select "Add snap-to-blob aperture"; the red circle in Figure 9 shows where to find this option. After selecting "Add snap-to-blob aperture," a small green box should appear, as in Figure 10.



Figure 10. The target star circled with the "Add snap-to-blob aperture" option.

Figure 11. The jogging options are circled in green. The recoloring option is circled in red.


After selecting a star, make sure that the green box is centered over the star. If it is not, move the box by right clicking and selecting "Enable jogging via arrow keys," which makes the arrow keys move the box. Move the box so that the crosshair is directly over the star. Not circled are the thumbnails below the image screen; these show a detailed view of the inside of the selected area and can be used for more precise movements. Once the box is centered, disable the arrow keys by selecting the "Disable jogging" option. Then turn the target star box red; this helps differentiate it and tells the software that it is the target. Rename the red box to "targetstar" (the rename function is located above the jogging options). Then choose 2 stars that are similar in brightness to the target star and follow the above instructions to highlight them, with 2 differences: they should be named "reference" and colored yellow. Refer to Figures 12 and 13 for how reference stars should look compared to the target star.

Figure 12. Creation of the 2 reference stars.


Figure 13. How the full PyMovie should look after choosing the target star and the reference stars.

Figure 14. The analyze tool circled in red.


2.4 Analyzing Data and Creating Light Curve Graphs

Once the stars are selected, click the analyze button circled in red in Figure 14. The analyze button runs through all ".fits" images loaded into the software; this is why loading all the ".fits" images is important. If some are missing, the occultation could be shortened or missed entirely. Depending on the number of frames the analysis could take a while, but under 200 frames shouldn't take longer than 1-2 minutes.

Figure 15. The plot tool is circled in red.

Once the software has finished analyzing the frames, click on the plot button circled in red in Figure 15. This pops up multiple windows with graphs showing each star's light; these are the light curve graphs. A good way to quickly tell whether an occultation occurred is to look at the composite data, which show the target star's brightness as well as the reference stars'. If a potential occultation occurred, the target star's brightness will drop substantially while the reference stars show no change (see Figure 17 for an example). This is not a foolproof method, which is why the data must then be put into Pyote. Pyote uses ".csv" files, which can be created with PyMovie: click the "write csv" button circled in red in Figure 16 to write a csv or histogram file, and save the file in the same folder as the ".fits" files.


Figure 16. The button to create a csv file is circled in red.

3. How to Use Pyote

To start, open Pyote and click the "Read light curve" button, circled in Figure 17. This will open a finder window to locate the histogram/csv file. Find the file and open it in Pyote. Once the file is imported, refer to Figure 18 for how it should look.

Figure 17. The read light curve button is circled to import the csv file from PyMovie.


Figure 18. A properly imported csv file.



Figure 19. The Mark D region button is circled in green, and the Mark R region button is circled in red.

On the graph in the box to the right, find the large drop. Mark 2 points where the drop begins, as seen in Figure 20, then click the Mark D region button circled in green in Figure 19. This marks where the beginning of the event occurs. Mirror the placement on the other side of the event to show its ending; once both end points are selected, use the Mark R region button to create the R region. The graph should then look like Figure 21, with a green bar at the beginning of the event and a red bar at the end.

Figure 20. Example of where to mark the D region.

Figure 21. How Pyote should look after selecting both regions, with the "Find event, then" option circled in red and the "write report" button circled in green.


Click "Find event, then," which will analyze the data. Once it is finished, the "write report" button becomes clickable; click it to make a full report of the occultation. This is the readable data that shows whether an occultation occurred. Figure 21 has both "Find event, then" and "write report" circled in red and green, respectively. Once Pyote has written a report, it opens a new window containing the report. Find the page containing the images in Figure 22. As explained in the figure, if the red line is to the right of the black line, the event is not a false positive and is an actual occultation.

Figure 22. Example of the report created by Pyote and what a confirmed occultation looks like.

4. Acknowledgements

I would like to thank J.A. Wise (WISRD COO) for introducing me to the RECON project leads, Drs. Marc Buie and John Keller; their presentations on how to use PyMovie and Pyote were critical to this paper. Finally, I thank WISRD's RECON Research Group's Principal Investigator, Ian Norfolk, for helping me gather the data used in the examples.

5. Citation

Buie, Marc. "Reading and Writing in the Digital Era." Eurybates Occultation Conference, 17-20 October 2021, CSN Planetarium, Las Vegas, NV. Keynote Address.


Ex situ coral growth: Use of artificial intelligence for quantifying growth of small polyp stony (SPS) coral microfragments in a marine aquarium (2)

L. Guiga, Dr. K. Griffs, J.A. Wise

1. Introduction

Methods for measuring coral fragment growth over time must be identified to evaluate microfragmentation of coral as a viable technique for restoration of damaged reefs. We are investigating a method to measure microfragments grown in the lab in a 90-gallon saltwater tank. This research is part of a larger research group investigating whether the size of the initial coral fragment has an effect on the growth rate (Figure 1).

Figure 1. The overarching question for the group research project is, "Does the initial fragment size affect coral growth rate?"

To investigate and scale up propagation of many different coral species through microfragmentation, investigators must have the means to determine whether microfragmentation accelerates a particular species' growth rate, and not all investigators have access to the most recent technology to assist with this kind of research, such as modern 3D scanners. Investigators at Mote Marine Laboratory (Koch et al., 2021) surveyed methods that have been used to quantify coral growth and highlighted the benefit of 3D imaging; Table 1 from their paper is included in Appendix A. We are investigating a 2D imaging analysis method, PlantCV (Plant Computer Vision), as a novel use case for this open software platform. PlantCV is described in Section 4.

1.1 Challenges/Considerations

We identified the following as challenges to using image analysis:

● Manageable cost
● Camera:
○ Quality of camera and lens

(2) Sponsored by the Wildwood Institute for STEM Research and Development (WISRD) and the Mariner Ocean Research Institute (MORI).


○ Focal point of lens
○ Camera rotations/orientation
○ Camera position

● Fragments:

○ Placement of fragments
○ Fragment rotation/movement
○ 3D fragment growth
○ Reference points
○ Other organisms in the tank: snails and algae
○ Removing the rack for cleaning and returning it, and the corals, to their original positions
● Light:
○ Changing light color throughout the day, and what time/wavelength looks/works best or worst for image analysis
● Coding:
○ Collecting metadata
○ Automating imaging at fixed times
○ Performing data transfer and automating it
○ Performing image analysis and automating the process

2. Substrate for Growing Corals

We began testing methods to record coral growth by attaching coral pieces, called fragments, to ceramic plugs and using various ready-to-go methods to support the plugs.

2.1 Ceramic Plugs

A common practice in the commercial reefing industry is to adhere coral fragments with cyanoacrylate onto small, individual, T-shaped ceramic plugs, as shown in Figure 2. This allows for ease of commercial coral propagation and purchasing, and for controlled placement within the tank while minimizing disturbance to the coral.

Figure 2. Recently cut coral fragment glued to the surface of a ceramic plug.


The top of the plug is approximately 6.4 mm in diameter, with a stem approximately 31.8 mm long. The plugs are designed to fit into a plastic frag rack. Each plug was marked with a Sharpie to keep its orientation toward the camera the same if the plug needed to be moved, or if it was moved by a wayfaring snail (Figure 3) or a water current and needed to be repositioned. Snails are a necessary component of the tank to mitigate algal growth, so their presence must be accommodated.

Figure 3. A snail moving over the rack surface (cleaning off algae) bumps into a fragment and alters its position.

2.2 Frag Racks

Initially, fragment plugs were supported by placing them in plastic micro test tube racks that were on hand in the lab (Figure 4), our first so-called "frag rack". The test tube racks were useful, but they were easily bumped, which resulted in a change of camera angle, and they were hard to level in the sand due to their large surface area. To solve this problem, we chose a large off-the-shelf egg crate rack for its ability to be secured at a fixed point within the tank. It was crucial that fragments retained their positions for meaningful analysis of growth.

Figure 4. Left: Micro test tube rack initially used to hold ceramic plugs. Right: Population of propagated Green Star Polyps held in an egg crate frag rack as they grow.


2.3 Rack Dimensions

We maximized the size of the frag rack to allow it to "lock in" to the substrate in the tank most effectively, thereby minimizing any rotation or shifting. The rack lightly scratches the front and back surfaces of the tank glass as it is inserted, and its weight and open-style design allow it to stay in place. The large rack also allows multiple pieces of coral frags to be monitored simultaneously.

● Frag rack: 61 cm wide x 44 cm deep
● Internal opening of a single square in the rack: 14.60 mm; external width of a single square: 17.73 mm; width of individual cross members: 1.75 mm (Figure 5 below)
● PVC pipe legs/feet: approximately 63.5 mm in diameter and 25.4 mm wide
● Mini zip ties: 3 x 100 mm long

Figure 5. Dimensions of individual squares within the frag rack. Left: Internal dimension; Middle: External dimension; Right: Dimension of plastic cross members.

2.4 Rack Construction

A 61 cm x 61 cm frag rack was cut to size using a band saw. The PVC legs of the rack were cut to size using a table saw; a miter saw was considered for the PVC, however, the fixed blade of the table saw allowed for more control. The cut edges of the PVC were sanded with 100 grit and then 400 grit sandpaper, as smooth surfaces grow less algae than rough surfaces. A PVC leg was attached at each corner of the frag rack and one at the center via zip ties; all five circles originated from a single blue PVC tube. These legs are necessary to elevate the crate off the sand for insertion of ceramic plugs, for water circulation, and to level the platform. Key steps in rack construction are shown in Figure 6.


Figure 6. Top Left: Trimming the egg crate frag rack to fit the width of the tank. Top Right: Cutting legs for the frag rack. Bottom Left: Close-up of leg and rack. Bottom Right: Zip tie attachment.

2.5 Fragment Position Identification

Because the rack will need to be periodically removed from the tank for cleaning, a coding system was developed to ensure the fragments are returned to the same position for imaging. Figure 7 shows the placement of research fragments in the rack.



Figure 7. Positioning fragments in the tank.

The coding system consists of a two-letter designation indicating the species; a .1 at the end of the designation indicates the parent colony, while .2 and higher indicate fragments cut from that parent, as shown in Figure 8. The full coding system is provided in Appendix B: Placement of Fragments on Frag Rack.

Figure 8. Coding system for fragment placement.
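In code, the scheme maps onto a tiny parser. The Python sketch below is illustrative; the example codes are hypothetical, not the actual rack map (which is in Appendix B).

# Two-letter species code, then ".1" for the parent colony and ".2"
# and higher for fragments cut from that parent.
def parse_code(code: str):
    species, index = code.split(".")
    role = "parent colony" if index == "1" else f"fragment {index}"
    return species, role

for code in ["GS.1", "GS.2", "GS.3"]:  # hypothetical codes
    print(code, "->", parse_code(code))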


2.6 Rulers

To manually measure differences in the growth of the coral, we attached physical rulers to the rack. Unfortunately, it was difficult to distinguish the white ruler, which looks blue under the tank lights, from light-colored coral fragments. The solution was to trim the ruler to minimize the white, so that all that was visible were the millimeter markings, as shown in Figure 9. We used a 45 W CO2 laser to trim the length of the ruler as close to the measurement lines as possible. The narrower rulers also fit within the open spaces of the frag rack and could be epoxied in place against the inside face of a square within the rack.

Figure 9. Rulers cut lengthwise to make markings more visible.

Five rulers were affixed, and the fragments were clustered around them to fit within the width of a photograph taken by a Raspberry Pi at a distance of approximately 18 cm from the glass of the tank (Figure 10). The positioning of the camera is discussed in the next section.

Figure 10. Five rulers spaced out across the Pi camera's field of view.
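The rulers give every photograph a built-in scale, which is what a 2D analysis ultimately needs: once a pixels-per-millimeter factor is read off the ruler markings, any thresholded fragment mask converts to physical area. A numpy sketch under stated assumptions: the scale factor and the mask are placeholders, and in practice the mask would come from color thresholding (e.g., in PlantCV).

import numpy as np

pixels_per_mm = 12.4  # hypothetical scale read off the ruler markings

# Stand-in for a thresholded fragment mask (True where coral is seen).
fragment_mask = np.zeros((100, 100), dtype=bool)
fragment_mask[40:60, 38:62] = True

area_px = fragment_mask.sum()
area_mm2 = area_px / pixels_per_mm**2
print(f"Fragment area: {area_mm2:.1f} mm^2")

Tracking this area image-to-image over weeks is the growth-rate measurement the larger project needs.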


3. Raspberry Pi

The Raspberry Pi was chosen because of its size, capabilities, and low cost.

3.1 Specs of the Raspberry Pi 4B

● Processor: Quad-core Cortex-A72 (ARM v8) 64-bit SoC at 1.5 GHz, turbo boost up to 2 GHz
● RAM: 8 GB LPDDR4-3200 SDRAM
● Wifi: 2.4 GHz and 5.0 GHz IEEE 802.11ac wireless
● Bluetooth: 5.0 BLE, with the option of gigabit ethernet
● USB ports: Two USB 3.0 ports and two USB 2.0 ports
● Standard Pi 40-pin GPIO header
● Two micro-HDMI ports supporting up to 4Kp60
● 2-lane MIPI camera port
● H.265 (4Kp60 decode), H.264 (1080p60 decode, 1080p30 encode)
● OpenGL ES 3.1, Vulkan 1.0
● Storage: microSD card slot for loading the operating system and data storage
● 5V DC via USB-C connector (minimum 3A*)
● 5V DC via GPIO header (minimum 3A*)
● Ambient operating temperature: 0-50 °C

3.2 Setting up the Raspberry Pi

A 128 GB microSD card was purchased, and the relevant software, installed on a laptop, was Raspberry Pi Imager (Figure 11), which configures the latest version of the Raspbian OS for the Pi on the SD card. After the microSD card was inserted into the Pi's microSD card slot, the Pi could boot up the system.

Figure 11. Left: The Raspberry Pi Imager app. Right: The app open.

Monitoring coral growth in the marine laboratory at MORI necessitated open lines of communication to the Raspberry Pi. To set up communication with the Pi, a static IP address was configured. Three methods were set up to access our microcomputer remotely: ssh, VNC, and FTP. For the VNC server, both internal and external IP addresses were configured to go to the Pi's IP address at port 5900, which made the Pi accessible on campus and from home. The same internal and external IP addresses could be used for FTP and ssh access, as long as port 22 was directed to the Pi at the router.

3.3 Connecting Remotely

The Raspberry Pi is a single board computer, meaning the circuit board that houses all the computer's components is roughly the size of a credit card. A monitor (via micro-HDMI), keyboard, and mouse (both USB) can be connected to the Pi to allow for interfacing and data collection observations. These materials were supplied by WISRD and set up in the Mariner Marine Laboratory (Figure 12). The monitor, a Samsung SyncMaster 2233, had to be connected through a VGA-to-HDMI adapter.

Figure 12. Joe Wise, WISRD COO, supplying the Raspberry Pi, camera, and monitor.

Remote access began by enabling both ssh (secure shell connection) and VNC in a terminal window using:

sudo raspi-config



A menu opens (Figure 13) and presents the options to enable/disable different capabilities of the Pi.

Figure 13. Left: The raspi-config menu. Right: Enabling ssh.

In the terminal window, one can confirm that ssh is enabled by connecting:

ssh pi@IP address (put in the internal or external IP address)

If successful, the user will next be asked for the password for the Pi. If an internal IP address is used, the client must be on the Pi's configured wifi network; if an external IP address is used, a different wifi network is needed. To assign a fixed IP address to the Pi, access the dhcpcd file with the command:

sudo nano /etc/dhcpcd.conf

Test the connection by running a picture command through ssh:

raspistill -o Desktop/name.jpg

and logging into VNC on a laptop (Figure 14) to view that picture.
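For the static-address step above, the lines added to /etc/dhcpcd.conf typically look like the following; the addresses are placeholders, not the lab network's actual values:

# static address for the Pi (placeholder values)
interface eth0
static ip_address=192.168.1.50/24
static routers=192.168.1.1
static domain_name_servers=192.168.1.1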

Figure 14. Left: Entering the ssh pi@someipaddress command. Right: After entering the password of the Pi for the ssh connection.


For a VNC client, go to https://www.realvnc.com/en/connect/download/viewer/macos/ and download the version of VNC Viewer appropriate for your operating system. Once in the app, type the IP address in the search bar and press enter to begin communicating with the computer.

3.4 File System on the Pi

The Pi's preset file tree was revised to accommodate the long-term nature of our coral restoration research. We set up an initial system to sort photos for a quick query of the data, as shown in Figure 15. A longer-term, more robust method to search images is described further in Section 3.10.3.

Figure 15. File tree for the automated organization system.
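A capture script along the following lines would populate such a tree automatically; this is a sketch with a placeholder base path, assuming the stock raspistill tool. Scheduling it with cron (e.g., a crontab line such as 30 17 * * * python3 /home/pi/capture.py) addresses the "automating imaging at fixed times" challenge listed in Section 1.1.

import datetime
import pathlib
import subprocess

BASE = pathlib.Path("/home/pi/coral_photos")  # placeholder base folder

def capture():
    # File each photo into year/month/day folders with a timestamped name.
    now = datetime.datetime.now()
    folder = BASE / now.strftime("%Y") / now.strftime("%m") / now.strftime("%d")
    folder.mkdir(parents=True, exist_ok=True)
    out = folder / now.strftime("img_%H%M%S.jpg")
    subprocess.run(["raspistill", "-o", str(out)], check=True)

if __name__ == "__main__":
    capture()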

3.5 Camera and Lens

The camera was chosen because it has enough resolution to process the difference between a ruler and a coral fragment. The Raspberry Pi Camera version 2.1 (Figure 16) is an 8 MP camera with video capabilities of 1080p at 30 fps, 720p at 60 fps, and 640 x 480p at 60/90 fps. The Pi camera also offers a V4L2 driver, which provides a standard API on top of the firmware-based camera system for Linux integration. Still photos can be taken at up to 3280 x 2464 pixels from the command line. The camera was purchased from CanaKit.


Figure 16. Raspberry Pi camera version 2.1 with standard lens.

3.5.1 Lens Focusing

Out of the box, the camera focus is set at infinity, but it does have a manual focus. Unfortunately, the camera lens is very small, which made focusing the camera on the rulers a tedious task of repeatedly taking pictures and making small adjustments back and forth, as shown in Figure 17.

Figure 17. Pre- and post-lens focusing. Left: Focused at infinity. Right: Properly focused at a distance of 7 inches from the glass.


3.5.2 Lens Distortion

A limitation of the Raspberry Pi camera is that the field of view becomes distorted and the straight rulers appear to bend in photographs, as seen in Figure 18, when the camera is not completely parallel to the glass, due to light changing speed as it moves through different media (in this case, glass, water, and air).

Figure 18. Lens distortion from the angle of the camera in relation to the glass, causing the rulers to appear bent/warped.

3.6 Designing the Camera Mount for the Raspberry Pi

A black case was purchased to house the Raspberry Pi. However, the case does not provide an attachment for the camera, and a permanent mount needed to be created. We modified the existing case by designing an insert to fit between the bottom portion of the case and the lid (Figures 19-21). The insert has a "gooseneck" extension to which the camera and the camera ribbon are attached. This modification also provides more room for the fan to circulate air. The camera mount was designed in the CAD software Onshape and printed in PLA on a FlashForge Creator Pro 2 3D printer.



Figure 19. Left: Rough sketch of the initial idea. Right: The finished version of the "elevation piece."

Figure 20. Designing a camera mount to interface with the purchased Raspberry Pi case.



Figure 21. First camera mount. Pieces fit on top of each other when assembled.

3.7 Development of the Laser Assisted Guide (LAG)

As identified in the Introduction, being able to line up the camera precisely is crucial. If the Pi needs to be moved to be worked on, or if the frag rack needs to be removed for cleaning, all pieces must be returned to the precise locations that were established so that the camera angle does not change. To address this issue, we developed the Laser Assisted Guide (LAG) (Figure 22).

Figure 22. The LAG in operation. A green laser is positioned above the camera lens and hits a defined target on the middle ruler.


3.7.1 LAG Design

The initial concept of the LAG is shown in Figure 23. In addition to the basic concept of shining a laser to line up the camera, we added a safety feature to the laser circuit, assembled the components, and did performance testing, as shown in Figure 24. The components are:

● 532 nm 5 mW green laser pointer, originally purchased in Rome, Italy in 2014 in the ruins of the old Roman Empire
● Rayovac High Energy 9V battery
● 20A 12V 3-terminal switch
● Tactile momentary switch
● T-type 9V battery connector
● 20 AWG wires

Figure 23. Initial "back-of-the-envelope" concept drawing of the LAG.

Figure 24. Testing the components of the Laser Assisted Guide.

Once we had determined the dimensions of the components, a formalized design was drafted as an isometric drawing, which is shown in Figure 25.


Figure 25. Isometric drawing of the LAG; side view of the LAG before finalization; finished LAG; and finished LAG with a circuit cover to protect against falling objects, liquids, and salt.

We designed the LAG in Onshape and printed it in PLA on the FlashForge Creator Pro 2 3D printer. When printing the first version of the LAG, the automatic supports generated by the CAD slicer did not account for their own weight and therefore placed too many support beams on a single beam. Since the program does not treat the print plate as a viable placement for supports, this single beam was also attached to the design at a sharp angle. Too much mass on this one beam led to a failed print, which is shown in Figure 26.


Figure 26. This is what happens when you do not understand the structural integrity of your support feature. Print time: 11.5 hours.

Once the support issue was rectified, a successful print was completed, and components were installed as shown in Figures 27 and 28.

Figure 27. Left: Front view of LAG showing scope above the "M". The camera attaches in the groove below the "M". Right: The left compartment holds a 9V battery when not in use; the back compartment houses the battery terminal connections; the central compartment holds the laser.


Figure 28. Left: LAG shining the green laser through the twin alignment pin holes. Right: The LAG's safety features; the LAG has an on/off switch and a "continuity" momentary switch.

3.8 Mounting the Camera/Pi to the Tank

To set up for the experiment, the Raspberry Pi with camera and Laser Assisted Guide was fixed atop an adjustable tripod stand. Initially the camera was propped up using items available in the lab, such as a lab stool, a cup, and painters' tape; however, since those items were not strong enough to withstand long use, a tripod stand was purchased to replace them. The tripod stand allowed for rapid height adjustment and compensated for the uneven ground, eliminating any wobble or rotation (Figure 29).



Figure 29. An adjustable tripod stand holds the camera in place.

3.9 Pi Temperature

Temperature checks of the Pi show that it typically runs at 50 °C. After about four weeks of use, the fan on the Pi did not appear to be pulling out as much air as it initially was. A decision was made to monitor the situation and not disturb the Pi until the end of this internship unless it was necessary. The high temperature has not affected our photo taking; however, higher temperatures may impede the Pi's processing speed, and as a result we will not be performing any image processing on the Pi, but rather using a MacBook instead to maintain data integrity.
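The temperature checks themselves can be done over ssh with the stock vcgencmd tool on Raspberry Pi OS; a minimal Python wrapper (output looks like temp=50.0'C):

import subprocess

out = subprocess.run(["vcgencmd", "measure_temp"],
                     capture_output=True, text=True)
print(out.stdout.strip())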

3.10 Image Acquisition

3.10.1 Testing Image Capture Time of Day

We took photos throughout the day over multiple days to determine the optimal time of day for the best color contrast in the imaging. The Kessil LED lights are programmed to change intensity and color throughout the day, which affects the quality of photos taken by the Pi camera. It was discovered that the primary investigator was color blind, making it a challenge to simply eyeball the pictures for dramatic changes in color depending on the time of day, unless pointed out by another lab member. Figure 30 shows notes on the features of photos taken at 4 times of the day. Figure 31 shows photos taken at the designated times of day. The 5:30 pm photos, which are taken after the tank lights turn off, were chosen for analysis because they have the best contrast in coloration.

Figure 30. Evaluation of optimal image capture at various times during the day.
