Recent Submissions

Publication: Reservoir simulation of primary production in the Zenith Field, Stafford and Reno Counties in Kansas (University of Kansas, 1991-08-31)
Adams, Kent A.
The Zenith Field is a large reservoir with production coming from four formations, namely 1) the Misener Limestone, 2) the Misener Sandstone, 3) the Maquoketa Dolomite, and 4) the Viola Limestone. Initial oil in place was estimated to be 100 million barrels. Recovery from primary production was approximately 20 million barrels, with an additional one million barrels recovered through waterflooding. A large quantity of mobile oil is believed to still exist in the field. The purpose of this project was to simulate primary recovery in the Zenith Field using a mathematical reservoir model. Limited data were available for the simulation and included formation thicknesses, porosity measurements, a single PVT analysis, four core reports, and the production history of the field. The complex geology and fluid flow in the field involved communication between formations, the presence of an aquifer, and indications of natural fractures in the carbonate formations. The long production history and the lack of data also added to the challenge of simulating the behavior of this field. Grids for each formation were input into the model along with available rock and fluid properties. Where no data were available, correlations were used. The best data were available for the Misener formations, and a match of actual field pressures, water cuts, and field gas-oil ratio was obtained for the initial two years of production from the Misener formations. It was discovered through the simulation that the Maquoketa and Viola contributed to production during these first two years, even though these formations had not been discovered at the time. A history match of the remaining primary production was not achieved due to the limited amount of data available. Data that were not available but would have helped in the simulation were additional core reports, additional field tests, and production data (pressures, oil, gas, and water) on individual wells. Based on this work, any future attempt to simulate a field as large as the Zenith Field should be supported by considerably more data than were available here.
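
As a quick check on the figures quoted above, the implied recovery factors follow directly from the stated volumes (a back-of-the-envelope calculation based on the numbers in the abstract, not a result reported separately in the thesis):

\[
RF_{\text{primary}} = \frac{N_p}{N} = \frac{20\ \text{MMbbl}}{100\ \text{MMbbl}} = 20\%, \qquad
RF_{\text{primary+waterflood}} = \frac{20 + 1}{100} = 21\%,
\]

leaving on the order of 80 million barrels of oil in place, consistent with the statement that a large quantity of mobile oil remains in the field.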

Publication: A comparison of sixteen classification strategies of rule induction from incomplete data using the MLEM2 algorithm (University of Kansas, 2020-05-31)
Nelakurthi, Venkata Siva Pavan Kumar Kumar; Busse, Jerzy Grzymala; Wang, Guanghui; Kulkarni, Prasad
In data mining, rule induction is a process of extracting formal rules from decision tables, where the latter are tabulated observations that typically consist of a few attributes, i.e., independent variables, and a decision, i.e., a dependent variable. Each tuple in the table is considered a case, and a table may contain any number of cases, one per observation. The efficiency of rule induction depends on how many cases are successfully characterized by the generated set of rules, i.e., the ruleset. There are different rule induction algorithms, such as LEM1, LEM2, and MLEM2. In the real world, datasets are often imperfect, inconsistent, and incomplete. MLEM2 is an efficient algorithm for dealing with such data, but the quality of rule induction largely depends on the chosen classification strategy. We compared 16 classification strategies of rule induction using MLEM2 on incomplete data. For this, we implemented MLEM2 for inducing rulesets based on the selection of the type of approximation, i.e., singleton, subset, or concept, and the value of alpha for calculating probabilistic approximations. A program called the rule checker was used to calculate the error rate based on the classification strategy specified. To reduce anomalies, we used ten-fold cross-validation to measure the error rate for each classification strategy. Error rates for these strategies were calculated for different datasets, compared, and presented.
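
As a minimal sketch of the evaluation loop described above, the following Python shows how a ten-fold cross-validated error rate can be computed for one classification strategy. The `induce_rules` and `classify` callables are hypothetical placeholders standing in for the MLEM2 rule induction and the rule-checker classification step; they are not the authors' implementation.

```python
import random

def ten_fold_error_rate(cases, induce_rules, classify, seed=0):
    """Estimate a classifier's error rate by ten-fold cross-validation.

    cases: list of (attributes, decision) pairs taken from a decision table.
    induce_rules: callable mapping a training list to a ruleset (e.g., MLEM2).
    classify: callable mapping (ruleset, attributes) to a predicted decision.
    """
    cases = list(cases)
    random.Random(seed).shuffle(cases)          # random but reproducible split
    folds = [cases[i::10] for i in range(10)]   # ten roughly equal folds
    errors = total = 0
    for i in range(10):
        testing = folds[i]
        training = [c for j, fold in enumerate(folds) if j != i for c in fold]
        ruleset = induce_rules(training)
        for attributes, decision in testing:
            total += 1
            if classify(ruleset, attributes) != decision:
                errors += 1
    return errors / total
```

The same loop would be repeated for each of the sixteen strategies (approximation type and alpha value) so that their error rates can be compared on an equal footing.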

Publication: Spectral Properties of Phase Noises and the Impact on the Performance of Optical Interconnects (University of Kansas, 2019-12-31)
AL-QADI, Mustafa Aladdin; Hui, Rongqing; Allen, Christopher; Perrins, Erik; Frost, Victor; Han, Jie
The unending growth of data traffic resulting from the continuing emergence of Internet applications with high data-rate demands sets huge capacity requirements on optical interconnects and transport networks. This requires the adoption of optical communication technologies that can make the best possible use of the available bandwidths of electronic and electro-optic components to enable data transmission with high spectral efficiency (SE). Therefore, advanced modulation formats must be used in conjunction with energy-efficient and cost-effective transceiver schemes, especially for medium- and short-reach applications. Important challenges facing these goals are the stringent requirements on the characteristics of the optical components comprising these systems, especially laser sources. Laser phase noise is one of the most important performance-limiting factors in systems with high spectral efficiency. In this research work, we study the effects of the spectral characteristics of laser phase noise on the characterization of lasers and their impact on the performance of digital coherent and self-coherent optical communication schemes. The results of this study show that the commonly used metric for estimating the impact of laser phase noise on performance, the laser linewidth, is not reliable for all types of lasers. Instead, we propose a Lorentzian-equivalent linewidth as a general characterization parameter for laser phase noise to assess phase noise-related system performance. Practical aspects of determining the proposed parameter are also studied, and its accuracy is validated by both numerical and experimental demonstrations. Furthermore, we study the phase noise of quantum-dot mode-locked lasers (QD-MLLs) and assess the feasibility of employing these devices in coherent applications at relatively low symbol rates with high SE. A novel multi-heterodyne scheme for characterizing the phase noise of laser frequency comb sources is also proposed and validated by experimental results with the QD-MLL. This proposed scheme is capable of measuring the differential phase noise between multiple spectral lines instantaneously in a single measurement. Moreover, we also propose an energy-efficient and cost-effective transmission scheme based on direct detection of field-modulated optical signals with advanced modulation formats, allowing for higher SE compared to current pulse-amplitude modulation schemes. The proposed system combines the Kramers-Kronig self-coherent receiver technique with the use of QD-MLLs to transmit multi-channel optical signals using a single diode laser source, without the additional RF or optical components required by traditional techniques. Semi-numerical simulations based on experimentally captured waveforms from practical lasers show that the proposed system can be used even for metro-scale applications. Finally, we study the properties of phase and intensity noise changes in unmodulated optical signals passing through saturated semiconductor optical amplifiers used for intensity noise reduction. We report, for the first time, on an effect of phase noise enhancement that cannot be assessed or observed by traditional linewidth measurements. We demonstrate the impact of this phase noise enhancement on coherent transmission performance by both semi-numerical simulations and experimental validation.
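
For context on the Lorentzian-equivalent linewidth idea, one relation commonly used in the laser phase-noise literature (stated here as background, not necessarily the exact definition adopted in the dissertation) links the white, frequency-independent floor $S_{\nu}^{0}$ of the single-sided frequency-noise power spectral density (in Hz$^2$/Hz) to the full width at half maximum of the resulting Lorentzian lineshape:

\[
\Delta\nu_{\text{Lorentzian}} = \pi\, S_{\nu}^{0}.
\]

A laser whose FM-noise spectrum also contains strong low-frequency (e.g., $1/f$) components can therefore exhibit a measured linewidth much larger than this Lorentzian-equivalent value, which is why the linewidth alone can be a misleading predictor of coherent-system performance.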

Publication: Determination of Clinical Efficacy of Ultrasound Stimulation on Piezoelectric Composites for Power Generation Applications (University of Kansas, 2019-12-31)
Alters, Morghan; Friis, Elizabeth; Luchies, Carl; D'Silva, Linda
Each year in the United States, over one million fractures occur [1]. When a bone breaks, regardless of the severity, it is referred to as a fracture. Individuals suffering from a fracture typically undergo some form of clinical intervention, ranging from a splint to orthopedic surgery, to increase the body's natural ability to fuse the bone. This does not always occur, and some fractures never fuse, resulting in a non-union. Bone-stimulating adjunct therapies are available to aid in bone healing, but they often require patient compliance or battery packs and carry restrictions such as the limited lifespan of batteries. Self-powered generators comprised of piezoelectric composite materials have shown promising results in bone stimulation applications under physiological loading conditions. To increase the effectiveness of these materials in clinical applications, the use of ultrasound loading was investigated for cases in which physiological loading is not possible. Twelve piezoelectric composite specimens (n=6 for both the 0.0 mm and 0.8 mm CLACS groups) were manufactured using three stacked lead zirconate titanate (PZT) discs, wired with the intent to be connected in parallel and encapsulated with medical grade epoxy. The effect of ultrasound intensity (0.1, 0.5, and 1.0 W/cm²), compliant layer thickness (0.0 mm and 0.8 mm), and ultrasound application angle (0°, 45°, and 90°) on power generation was investigated for all specimens. An increase in ultrasound intensity resulted in an increase in power production for all specimens. At an application angle of 0°, the 0.8 mm CLACS group produced more power than the 0.0 mm group, but a reversed trend was observed at angles of 45° and 90°. Lastly, when compared to 0°, the power output of both specimen groups decreased significantly in the 45° and 90° conditions at all intensities. This study demonstrates that ultrasound is a viable option for stimulating PZT composites intended for power generating applications where physiological loading is not applicable.

Publication: Evaluating the Proliferation and Pervasiveness of Leaking Sensitive Data in the Secure Shell Protocol and in Internet Protocol Camera Frameworks (University of Kansas, 2019-12-31)
Andrews, Ron; Bardas, Alexandru G; Li, Fengjun; Luo, Bo
In George Orwell's Nineteen Eighty-Four, there is fear regarding what "Big Brother" knows, since even thoughts could be "heard". Though we are not quite at that point, it should concern us all what data we are transferring, both intentionally and unintentionally, and whether or not that data is being "leaked". In this work, we consider the evolving landscape of IoT devices and the threat posed by the pervasive botnets that have been forming over the last several years. We look at two specific cases: one is the practical application of a botnet system actively executing a man-in-the-middle attack against SSH, and the other leverages the same paradigm for eavesdropping on Internet Protocol (IP) cameras. For the latter case, we construct a web portal for interrogating IP cameras directly for information that they may be exposing.

Publication: Geochemical Feasibility of Brine Exchange Between Arbuckle and Lansing-Kansas City Formations as a Produced Water Management Alternative (University of Kansas, 2019-12-31)
Barimah, Richard; Peltier, Edward F; Randtke, Stephen J; Barati Ghahforikhi, Reza
The State of Kansas is facing increasing demands on limited available drinking water sources. Reclaiming and reusing wastewaters for applications able to utilize lower quality water sources could help to extend and conserve existing drinking water sources. Oil production in Kansas generates over one billion barrels of produced water each year, which must be properly managed in compliance with environmental regulations. Successful reclamation of this wastewater for industrial and other uses could reduce freshwater requirements in the oil industry and provide a new water source for other water needs of the state. Produced waters from Kansas oil fields are usually very salty, with a median total dissolved solids (TDS) concentration of 90,000 mg/L. However, there are a few oil formations in the Central Kansas Uplift area that have less salty water, with a TDS of 40,000 mg/L or lower. Exchange of formation brine, in which the highly saline brine from one formation is injected into the lower salinity formation, along with extraction of the lower salinity formation water to balance formation pressure, has been proposed as a way of managing produced water for potential reuse applications. This study was conducted to further investigate the feasibility of brine exchange as a produced water management practice from a geochemical and environmental standpoint. A geochemical software program (PHREEQC) was utilized, along with both the PHREEQC and Pitzer databases, to predict precipitation reactions that might occur during the brine exchange. These predictions were compared to laboratory results to determine the limits and degree of accuracy of the model. Adverse reactions such as precipitation of solids in the formation during the exchange could block the pore spaces and reduce the conductivity of the formation. This study established that mixing Lansing-Kansas City formation brine with Arbuckle formation brine (from the Wellington wellfield in Kansas) could potentially cause calcium carbonate scale formation, and that cutting down the bicarbonate content is essential to prevent scaling. Also, using both the Pitzer and PHREEQC databases, PHREEQC accurately predicted the amount of carbonate scale formed at well oversaturated conditions, i.e., at a saturation index (SI) of 2 or greater, within the range of ionic strength investigated, i.e., 1 M to 3.65 M. However, the model is likely to overestimate the amount of scale formation at close to saturation conditions, for SIs between -0.5 and 2, for the mineral phases barite, celestite, gypsum, and anhydrite. For SIs in this range, both databases are likely to predict similar SIs for the sulfate minerals; but for the carbonate mineral phases, the predicted SIs from the Pitzer database are higher. This is due to the higher predicted activity for the carbonate ion, as the Pitzer database does not consider ion pairing or complexation, and more specifically the formation of the NaHCO3 ion pair. pH predictions from the two databases closely agree with each other but failed to accurately predict measured pH values in lab experiments. This is most likely due to interference with the pH measurement caused by the high sodium ion concentration.
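
For reference, the saturation index used throughout this comparison is defined in the standard geochemical way,

\[
SI = \log_{10}\!\left(\frac{IAP}{K_{sp}}\right),
\]

where IAP is the ion activity product of the mineral's dissolved constituents and $K_{sp}$ is its solubility product: SI > 0 indicates oversaturation (scale can precipitate), SI = 0 equilibrium, and SI < 0 undersaturation. The two databases differ mainly in how the activities entering the IAP are computed.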

Publication: Characterization of MSY Nanodiamonds as a Nanoparticulate Adjuvant for RiVax Vaccine (University of Kansas, 2019-12-31)
Brachtenbach, Allison Jo; Forrest, Laird; Middaugh, Russ; Dekosky, Brandon
Vaccines are weakened or mutated versions of pathogens that invoke an immune response through controlled and targeted delivery. Live-attenuated and inactivated vaccines invoke an immediate and protective immune response but carry higher risks in a subset of patients. Subunit vaccines are an antigenic part of the pathogen that can be paired with an adjuvant to invoke an effective immune response while causing fewer adverse reactions. Adjuvants are made of a wide variety of materials that aid in antigenic expression in the body, in hopes of providing protection. Nanodiamonds have a 3D carbon structure with highly tailorable surface chemistry, allowing antigen interactions to be customized to mimic an effect similar to a virus-like particle. Here, the synthesis of modified nanodiamonds is further developed by chemical modification into three derivatives: oxidized, acidified, and amine-modified nanodiamonds. Each derivative was compositionally and morphologically characterized to understand its stability and binding capacity. The unmodified nanodiamonds were selected for further characterization with a vaccine called RiVax. RiVax is a mutated ricin protein that prevents ricin toxicity for up to four months, but greater longevity and immunity are desired. In in vivo release studies, RiVax adsorbed to the unmodified nanodiamonds showed recognition of adsorbed ricin but significantly lower recognition of soluble ricin, leading to a survival rate of 50% in mice. Overall, nanodiamonds did not improve the RiVax vaccine but showed respectable alterability, which could be beneficial to other subunit vaccines.

Publication: Development of a Multichannel Wideband Radar Demonstrator (University of Kansas, 2019-12-31)
Carr, Kevin; Leuschen, Carl J; Rodriguez-Morales, Fernando; Stiles, James M
With the rise of software-defined radios (SDRs) and the trend toward integrating more RF components into MMICs, the cost and complexity of multichannel radar development have gone down. High-speed RF data converters have seen continuous increases in both sampling rate and resolution, further rendering a growing subset of components in an RF chain unnecessary. A recent development in this trend is the Xilinx RFSoC, which integrates multiple high-speed data converters into the same package as an FPGA. The Center for Remote Sensing of Ice Sheets (CReSIS) is regularly upgrading its suite of sensor platforms, spanning from HF depth sounders to Ka-band altimeters. A radar platform was developed around the RFSoC to demonstrate the capabilities of the chip when acting as a digital backend and to evaluate its role in future radar designs at CReSIS. A new ultra-wideband (UWB) FMCW RF frontend was designed that consists of multiple transmit and receive modules with a 6 GHz bandwidth centered at 5 GHz. An antenna array was constructed out of Vivaldi elements to validate radar system performance. Firmware developed for the RFSoC enables radar features such as beamforming, frequency notching, dynamic stretch processing, and variable gain correction. The feature set presented here may prove useful in future sensor platforms used for the remote sensing of snow, soil moisture, or crop canopies.
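
As a point of reference for the bandwidth quoted above, the theoretical range resolution of an FMCW radar follows from the standard relation below (a textbook formula, not a measured specification of this demonstrator):

\[
\Delta R = \frac{c}{2B} = \frac{3\times 10^{8}\ \text{m/s}}{2 \times 6\times 10^{9}\ \text{Hz}} = 2.5\ \text{cm},
\]

which illustrates why ultra-wideband sweeps are attractive for resolving thin layers such as snow.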

Publication: Interaction of Biomolecules at the Air-Water Interface: Evaluating the Role of Lipid Composition when Interacting with Lung Surfactant Proteins and Engineered Carbon Nanodiamonds (University of Kansas, 2019-12-31)
Chakraborty, Aishik; Dhar, Prajnaparamita; Gehrke, Stevin Henry; Leonard, Kevin Charles; Paul, Arghya; Kwon, Gibum
Lung surfactants (LSs) are a complex mixture of lipids and proteins found in the alveolar lining of the lungs. Their primary objective lies in lowering the surface tension of the aqueous layer on which they reside. By doing so, LSs reduce the energy involved in breathing, and any loss or dysfunction of the surfactants can cause fatal respiratory complications. Successful treatment methods require a thorough understanding of the biophysical properties of the LSs and their interaction with any material that may come into contact with them. This dissertation aims at evaluating the interaction of the different lipids found in the surfactant pool with such plausible candidates at the air-water interface. Engineered carbon nanodiamonds (ECNs) are selected because of their potential as a candidate for drug delivery through the respiratory tract. Therefore, it is necessary to evaluate any possible toxic outcome from ECNs. Here, we observe that both the lipid headgroup charge and the tail saturation impact the biophysical properties of the monolayer. We also evaluate the impact of the protein Mini-B, a synthetic analog of the native surfactant protein SP-B, on the biophysical properties of the LSs. Mini-B is a suitable candidate for surfactant replacement therapy (SRT), which is associated with lung diseases. Thus, Mini-B needs a thorough biophysical analysis. Lastly, we observe the effectiveness of Mini-B in countering the deleterious effects of cholesterol. Cholesterol is found in the native mixture and helps in fluidizing the monolayer. However, cholesterol has been reported to have some harmful impact on the LSs. Thus, it is a highly disputed component in SRT, with some formulations removing cholesterol from their product. We observe that 1 to 5 wt.% of Mini-B can counter the harmful effects of small quantities of cholesterol, providing a wholesome mixture.

Publication: Battery Management and Battery Modeling Considerations for Application in a Neighborhood Electric Vehicle (University of Kansas, 2019-12-31)
Choate, Matthew; Depcik, Christopher; Fang, Huazhen; Liu, Lin
Transitioning from internal combustion engine vehicles (ICEVs) to electric vehicles (EVs) consolidates and relocates emissions, endeavoring to improve air quality, particularly in high-traffic urban areas. Unfortunately, many obstacles to widespread EV use remain, broadly related to user familiarity, convenience, and effectiveness. However, EVs are better suited to some applications than others. Following the introduction, this thesis covers the process of upgrading a neighborhood electric vehicle (NEV) from lead-acid batteries to a swappable battery pack consisting of lithium iron phosphate (LiFePO4), or LFP, cells. Although LFP cells are considered safer than other lithium-ion cells, a new battery charger and battery management system (BMS) were installed to ensure proper function and maintenance. While the new electronics appeared to be successfully integrated during initial testing, several cells within the battery pack were over-discharged, undergoing voltage reversal, while the vehicle sat outside during winter. This prompted a reassessment of battery management practices and implementation, resulting in the construction of a new battery pack and a redesign of the charge and discharge controls. The ensuing chapter pertains to the battery management practices employed in the vehicle and to battery management in general. This chapter begins with background that discusses the fundamentals of cell function, modes of failure, and, lastly, methods of obviating failure and prolonging cell longevity. Finally, chapter four describes battery modeling as a tool for maintaining cells in EVs. Determination of states that cannot be measured directly but are important to battery management and consumer comfort is discussed. Mathematical models and equivalent circuit models of cell behavior are of particular interest. Common equivalent circuit models are parameterized for several cells, and their voltage estimation capabilities are compared.
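
As an illustration of the equivalent-circuit modeling discussed in chapter four, the sketch below simulates the terminal voltage of a first-order Thevenin model (series resistance plus one RC polarization pair) under an applied current profile. The parameter values and the linear open-circuit-voltage curve are placeholder assumptions for illustration, not the values identified for the LFP cells in the thesis.

```python
import math

def simulate_thevenin(current, dt, ocv_of_soc, capacity_ah,
                      r0=0.01, r1=0.02, c1=2000.0, soc0=1.0):
    """Simulate a first-order Thevenin cell model.

    current: applied current in A at each step (positive = discharge)
    dt: time step in s; ocv_of_soc: callable mapping SOC to open-circuit voltage
    Returns parallel lists of state of charge and terminal voltage.
    """
    soc, v_rc = soc0, 0.0
    socs, volts = [], []
    tau = r1 * c1                                   # RC time constant in s
    for i in current:
        soc -= i * dt / (capacity_ah * 3600.0)      # coulomb counting
        v_rc = (v_rc * math.exp(-dt / tau)
                + r1 * (1.0 - math.exp(-dt / tau)) * i)   # RC branch voltage
        volts.append(ocv_of_soc(soc) - i * r0 - v_rc)     # terminal voltage
        socs.append(soc)
    return socs, volts

# Example: a 10 A discharge for one hour on a hypothetical 40 Ah cell
# with a linear OCV curve from 3.2 V (empty) to 3.4 V (full).
socs, volts = simulate_thevenin([10.0] * 3600, 1.0,
                                lambda s: 3.2 + 0.2 * s, capacity_ah=40.0)
```

State estimators such as Kalman filters are typically layered on top of a model like this to recover quantities that cannot be measured directly, such as state of charge.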

Publication: Incorporating CMIP5 Precipitation Projections into IDF Estimates for the Kansas City Area (University of Kansas, 2019-12-31)
Crowl, Madison Elizabeth; Roundy, Joshua K; Peltier, Edward F; Young, Bryan
There is building evidence that climate change is causing an intensification of precipitation patterns. Locations around the world can expect to experience more intense precipitation events. Engineers must be able to account for this future climatic uncertainty in their designs in order to develop sustainable and resilient systems. Intensity-Duration-Frequency (IDF) estimates, developed by the National Oceanic and Atmospheric Administration (NOAA), are used across the United States for engineering design. These estimates were developed under the assumption of a stationary climate (with respect to precipitation intensity). However, research has shown that this assumption may lead to the underestimation of extreme events. The objective of this study is to characterize projected changes in storm intensity, duration, and frequency for the Kansas City area by 1) identifying precipitation trends and 2) developing IDF estimates that incorporate projected climate trends. To achieve these objectives, precipitation data were analyzed from six NOAA gages and the Coupled Model Intercomparison Project version 5 (CMIP5) ensemble. Annual and monthly precipitation was analyzed for both the observational gage data and the CMIP5 model data using the Mann-Kendall trend test. Increasing trends were identified in the winter (Dec.-Feb.) and spring (Mar.-May) months, while decreasing trends were identified for July-September, indicating a potential shift in seasonal precipitation patterns. Increasing trends were identified for annual precipitation for both the gage data and the climate model data. Partial duration series (PDS) were developed for the six gages using the peak-over-threshold (POT) method, and significant increasing trends were identified for the frequency of PDS events. A strong correlation was identified between PDS event frequency and annual precipitation. This relationship was used to develop a novel approach for incorporating climate model projections at the monthly scale into the gage-based PDS events used to derive IDF curves. In this methodology, the PDS annual exceedance rate for the future time period was determined based on the CMIP5-projected annual precipitation. IDF estimates incorporating projected climate model trends were then developed using the adjusted PDS data sets. Results showed an increase in event magnitude from the original estimates for most durations and recurrence intervals, across all gages, with the 2-year and 100-year events increasing the most. The increase in event magnitude has serious implications for engineering design: critical infrastructure, such as bridges, roads, and overflow channels, designed using stationary methods may be underdesigned.
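
The sketch below illustrates a simple peak-over-threshold extraction of a partial duration series from a daily precipitation record. The threshold choice and the one-day declustering rule are illustrative assumptions, not NOAA's or the author's exact procedure.

```python
def partial_duration_series(daily_precip, threshold, min_separation_days=1):
    """Extract a partial duration series (PDS) by peak-over-threshold.

    daily_precip: sequence of daily precipitation depths (e.g., inches).
    Keeps every exceedance of `threshold`, merging exceedances closer together
    than `min_separation_days` and retaining only the largest of each cluster.
    Returns a list of (day_index, depth) events.
    """
    events = []
    for day, depth in enumerate(daily_precip):
        if depth <= threshold:
            continue
        if events and day - events[-1][0] <= min_separation_days:
            # Same storm cluster: keep the larger depth.
            if depth > events[-1][1]:
                events[-1] = (day, depth)
        else:
            events.append((day, depth))
    return events

# The annual exceedance rate is then simply
#   len(events) / (number of years of record),
# which is the quantity adjusted using the projected annual precipitation.
```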

Publication: Linking community capital measurements to building damage estimation for community resilience (University of Kansas, 2019-12-31)
Daniel, Liba Achamma; Sutley, Elaina J.; Lequesne, Rémy D.; Tran, Dan
Community-level resilience has become an important consideration for city planners, policymakers, and other decision-makers, and therefore it is increasingly investigated by engineering researchers. The robustness of the built environment and the interconnectedness of the social system are important factors affecting community-level resilience that need further investigation. Recent research shows a need for buildings to stay operational to preserve quality of life after a disruptive event [Sattar et al. (2018)]. Disasters cause significant disruption to social institutions, the local economy, and overall quality of life due to damage to buildings and other civil infrastructure. Understanding the relationship among different community functions, mainly between buildings and organizations, is a significant part of the motivation behind this research. There are seven types of capital inherent in a community: financial, political, social, human, cultural, natural, and built [Flora et al. (2008)]. This work advances the current state of knowledge on the relationship between buildings (a subset of built capital) and organizations throughout a community through a novel quantitative framework based on the seven community capitals. This thesis proposes a two-tiered approach, where one tier is performed at the community level and the other at the building level, and the two are then integrated to measure post-disaster community capital losses. The community-level losses were measured using a novel scoring system, based on keywords defining each community capital, that captures changes in each capital induced by building damage. The second tier measures building-level losses, including the number of damaged buildings as a proxy for built capital, dislocation rates for social capital, morbidity rates for human capital, accessibility changes for political capital, and repair costs for financial capital. The framework is exemplified on a virtual community, Centerville, under an earthquake scenario. Centerville is comprised of multiple building types with varying robustness [Ellingwood et al. (2016)]. Occupancies used to assemble the building inventory of Centerville include residential, commercial, and industrial, as well as critical facilities such as hospitals, fire stations, schools, and government offices, modeled using 16 building archetypes. The community also comprises a synthetic population with varied attributes linked to social vulnerability and resilience. The framework presented is hazard-generic; however, for demonstration, the hazard considered in this thesis is seismic and is adapted from Lin and Wang (2016). Disaster impact measurements are examined across the building portfolio for the earthquake scenario at different points in time to support comparisons. Although earthquake demand and some measures of community capital remain ill-defined, the proposed framework demonstrates the relative importance of including community capitals in loss estimation models to calculate community-level performance and resilience objectives. The resulting community capital measures, which aid community decision-makers in mitigation planning or in post-disaster response and recovery efforts, are provided using a community capital 'dashboard'. A dashboard presents trade-offs to support decision-makers in understanding how changes to characteristics of the community can enhance or inhibit community resilience. Additionally, a dashboard enables the user to see the trade-offs across multiple criteria that influence community resilience, as opposed to a single measure that may be too vague for a decision-maker to understand. The purpose of this work is to aid community decision-makers in mitigation planning or in post-disaster response and recovery efforts through a holistic view of disaster impacts on their community.

Publication: A High Order Overset Flux Reconstruction Method for Dynamic Moving Grids (University of Kansas, 2019-12-31)
Duan, Zhaowen; Wang, ZJ; Farokhi, Saeed; Taghavi, Ray; Wu, Huixuan; Tu, Xuemin
Overset meshes have a unique advantage in handling moving boundary problems, as remeshing is often unnecessary. Recently, overset Cartesian and strand meshes were used successfully to compute complex flow over rotorcraft. Although it is quite straightforward to deploy a high-order finite difference method on the Cartesian mesh, the near-body solver for the strand mesh is often limited to second-order accuracy. In this dissertation, we develop a high-order FR/CPR solver, hpMusic, on both the near-body and background grids, and extend it to handle moving boundary problems. The solver is also extended to sliding meshes, which can be considered a special case of overset meshes. The use of sliding meshes can often simplify the treatment of moving boundary problems with simple translational and rotational motions. Two different approaches to handling the overset interfaces are evaluated for accuracy, efficiency, and robustness. Accuracy studies are carried out, and the designed order of accuracy is obtained for both inviscid and viscous flows. Steady and unsteady flow problems are solved on stationary overset meshes, and the results agree well with those in the literature and from experiments. A turbine blade under the wake of moving cylinders is simulated using sliding meshes, and the flow structures are compared with those without moving cylinders. The solver is then tested for moving overset meshes with a benchmark dynamic airfoil problem from the 4th International Workshop on High-Order CFD Methods. Results showing hp-convergence are obtained and compared with those from other groups. Finally, flow over a hovering rotor is simulated for comparison with experimental data. In this case, the present high-order solver is capable of generating and propagating tip vortices with high resolution. Good agreement is achieved with experimental data in tip vortex core size, location, and swirl velocity at third-order accuracy.

Publication: W-Band FMCW Radar for Range Finding, Static Clutter Suppression & Moving Target Characterization (University of Kansas, 2019-12-31)
Goodman, Levi Tanner; Allen, Christopher T; Blunt, Shannon D; Stiles, James M
Many radar applications today require accurate, real-time, unambiguous measurement of target range and radial velocity. Obstacles that frequently prevent target detection are the presence of noise and backscatter from other objects, referred to as clutter. In this thesis, a method of static clutter suppression is proposed to increase the detectability of moving targets in high-clutter environments. An experimental dual-purpose, single-mode, monostatic FMCW radar, operating at 108 GHz, is used to map the range of stationary targets and to determine the range and velocity of moving targets. By transmitting a triangular waveform, which consists of alternating upchirps and downchirps, the received echo signals can be separated into two complementary data sets, an upchirp data set and a downchirp data set. In one data set, the return signals from moving targets are spectrally isolated (separated in frequency) from the static clutter return signals. The static clutter signals in that first data set are then used to suppress the static clutter in the second data set, greatly improving the detectability of moving targets. Once the moving target signals are recovered from each data set, they are used to solve for target range and velocity simultaneously. The moving target of interest for the tests performed was a reusable paintball (reball). Reball range and velocity were accurately measured at distances up to 5 meters and at speeds greater than 90 m/s (200 mph), with a deceleration of approximately 0.155 m/s/ms (meters per second per millisecond). Static clutter suppression of up to 25 dB was achieved, while moving target signals only suffered a loss of about 3 dB.
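
The reason the upchirp and downchirp returns can be combined to solve for range and velocity simultaneously is captured by the standard triangular-FMCW beat-frequency relations (generic textbook expressions, with the sign convention that v > 0 for an approaching target; not values specific to this radar):

\[
f_{b,\text{up}} = \frac{2RS}{c} - \frac{2v}{\lambda}, \qquad
f_{b,\text{down}} = \frac{2RS}{c} + \frac{2v}{\lambda},
\]

where $S$ is the chirp slope and $\lambda$ the carrier wavelength, so that

\[
R = \frac{c\,(f_{b,\text{up}} + f_{b,\text{down}})}{4S}, \qquad
v = \frac{\lambda\,(f_{b,\text{down}} - f_{b,\text{up}})}{4}.
\]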

Publication: Fighting Fire with Format: Exploiting Autoantigen Delivery to Combat Autoimmunity (University of Kansas, 2019-12-31)
Griffin, Jonathan Daniel; Berkland, Cory; DeKosky, Brandon; Dhar, Prajna; Mellott, AJ; Friis, Lisa
There is a dire need for next-generation approaches to treating autoimmune disease that can potently inhibit the autoreactive destruction of host tissue while conserving protective immune functions. Antigen-specific immunotherapies (ASITs) offer such promise by harnessing the same pathogenic epitopes attacked in autoimmunity to selectively suppress the autoreactive cells that cause disease. Formatting autoantigen for ASIT is not trivial, as no clinical immunotherapies of this class are currently approved for treating autoimmune disease despite decades of attempts. This dissertation sought to explore physical and chemical determinants of efficacy in ASITs as a contribution toward fostering a future of precisely tailored autoimmune interventions. In these works, three autoantigen formats are explored: soluble, particulate, and surface delivery, each within the context of murine experimental autoimmune encephalomyelitis (EAE). In chapter 2, the soluble antigen array (SAgA) was adopted as a platform to investigate the role of antigen valency in evoking B cell anergy to promote tolerance among mixed splenocytes. Analysis of SAgAs presenting discrete autoantigen valencies revealed that low-valency (but not monovalent) autoantigen was most capable of inhibiting B cell calcium mobilization, and this inhibition predicted tolerogenic effects in a mixed population of splenocytes. In chapter 3, particulate autoantigen delivery was explored by formulating a "functional" delivery system consisting of an antioxidant vitamin E emulsion. This formulation proved capable of suppressing EAE in vivo, but mechanistic analyses suggested a driver of effect that differed from the originally hypothesized antioxidant function. These results motivated the invention of the antigen-specific immune decoys (ASIDs) reported in chapters 4 and 5. ASIDs were comprised of autoantigen restricted onto the surface of microporous collagenous biomaterials. Peptide-epitope-decorated constructs prevented EAE in vivo by intercepting and exhausting autoreactive cells. Chapter 5 was an extension of this work, where polyantigenic ASIDs were fabricated to present a comprehensive palette of autoantigens and account for the heterogeneity of authentic disease. Though capable of amplifying discrete antigen-specific cell subsets ex vivo, polyantigenic ASIDs apparently did not induce cellular exhaustion and failed to attenuate EAE as a result. Together, these works emphasize the importance of characteristics such as antigen valency and context. The ongoing exploration of the ASID platform provides a foundation for assessing the utility of engineered local microenvironments in immune-mediated disease.

Publication: EFFECT OF CRACK-REDUCING TECHNOLOGIES AND SUPPLEMENTARY CEMENTITIOUS MATERIALS ON SETTLEMENT CRACKING OF PLASTIC CONCRETE AND DURABILITY PERFORMANCE OF HARDENED CONCRETE (University of Kansas, 2019-12-31)
Ibrahim, Eman Khalid; Darwin, David; O'Reilly, Matt; Lepage, Andres; Lequesne, Remy; Barati, Reza
The effects of crack-reducing technologies and supplementary cementitious materials on plastic settlement cracking and the durability of concrete subjected to freezing and thawing were evaluated. The study of settlement cracking included 86 concrete mixtures containing internal curing (IC), a shrinkage reducing admixture (SRA), optimized and non-optimized aggregate gradations, or the supplementary cementitious materials (SCMs) slag cement and silica fume. Some concrete mixtures contained combinations of these technologies, such as supplementary cementitious materials and internal curing. Both crack length and width were measured. The study of durability included 28 concrete mixtures, divided into three programs. Program 1 involved concrete containing different dosage rates of one of two shrinkage reducing admixtures. Program 2 involved concrete containing different volume replacements of Class F and Class C fly ash and different combinations of a rheology-modifying admixture (RMA) with and without Class C fly ash. Program 3 involved concrete containing different dosage rates of one of two shrinkage compensating admixtures (SCAs), one based on MgO that also incorporated a shrinkage reducing admixture and one based on CaO. The study evaluated the effect of the technologies and materials on freeze-thaw durability, based on ASTM C666 Procedure B; scaling resistance, based on a modified version of Canadian Test BNQ NQ 2621-900 Annex B; and characteristics of the air-void system, obtained following ASTM C457. The research also examined the correlation between air-void characteristics, compressive strength, freeze-thaw durability, and scaling resistance for the mixtures. All mixtures experienced increased settlement cracking as slump increased; the increase, however, was very low for the concrete containing both slag cement and silica fume, with or without internal curing. All crack-reducing technologies and supplementary cementitious materials tested resulted in a reduction in settlement cracking at all slumps compared to mixtures without these technologies and materials. The use of a non-optimized aggregate gradation increased settlement cracking compared to mixtures with an optimized gradation. The combination of slag cement and silica fume in concrete provided a greater reduction in settlement cracking than slag cement alone. In terms of durability, mixtures with an average air-void spacing factor of 0.007 in. (0.18 mm) or less performed well in the freeze-thaw test. Mixtures with an average air-void spacing factor of 0.007 in. (0.18 mm) or less and a compressive strength greater than 4000 psi (27.6 MPa) performed well in the scaling test. In terms of specific performance, one SRA had no effect on freeze-thaw durability, while the other caused reduced durability. Concrete with either SRA exhibited a reduction in scaling resistance. Mixtures containing Class F fly ash, RMA, or Class C fly ash in conjunction with RMA at all dosages studied performed well in the freeze-thaw test if the air-void spacing factor was 0.007 in. (0.18 mm) or less. Class F or Class C fly ash alone had no effect on scaling resistance when the concrete had an air-void spacing factor of 0.0071 in. (0.18 mm) or less. The RMA, without and with Class C fly ash, resulted in reduced scaling resistance; this reduction was in all cases associated with a concrete compressive strength below 4000 psi (27.6 MPa). An SCA based on CaO had no effect on freeze-thaw durability at the dosage used in this study. The SCA based on MgO resulted in lower freeze-thaw durability, but only in mixtures that had increased air-void spacing; the increased air-void spacing may have been due to the shrinkage reducing admixture incorporated in the admixture, which can reduce the stability of the air-void system. With the exception of one mixture with a high air-void spacing factor [0.0096 in. (0.24 mm)], the two SCAs had no effect on scaling resistance at all dosages used in this study. All mixtures exhibited a lower air content in the hardened concrete than in the plastic concrete. This reduction in air content was significantly greater for mixtures containing high dosages of SRAs or the RMA.

Publication: Modeling of Compression Ignition Engines for Advanced Engine Operation and Alternative Fuels by the Second Law of Thermodynamics (University of Kansas, 2019-12-31)
Mattson, Jonathan Michael Stearns; Depcik, Christopher D; Liu, Lin; Li, Xianglin; Peltier, Edward F; Stagg-Williams, Susan M
With the advent of modern engine control strategies, and particularly electronic common-rail injection, the scope and scale of what is achievable and controllable in compression-ignition engines have expanded rapidly in recent years. The potential marriage of electronically controlled and multi-point fuel injection, dual-fuel combustion, variable exhaust gas recirculation, exhaust waste heat recovery, low-temperature combustion, and the immense variety of potential liquid and gaseous fuels available means that the older understanding of compression ignition engine combustion is incomplete and inadequate to explain, predict, control, and optimize more novel engine combustion and operational regimes. This mandates that new models, both diagnostic and theoretical, be developed to explore engine combustion and pick apart the various phenomena that result, and it includes revisiting models that previously have been sidelined for a lack of usefulness. To that end, this work details the construction, validation, and usage of a diagnostic heat release model focused on the application of the 2nd Law of Thermodynamics and the phenomena associated with entropy generation and availability destruction, applied to the accumulated test data of numerous fuels and engine operational modes. A critical aspect of this research is the marriage of this model with a suite of emissions analysis technologies, allowing for a complete characterization of engine-out regulated and unregulated emissions species, as well as a thoroughly instrumented and highly modified single-cylinder compression-ignition engine. This combined test apparatus for novel fuels and engine operational modes, in combination with the models described herein, serves as a means to collect and dissect engine performance, in-cylinder pressure, engine knock and noise, emissions, heat release, and availability release and consumption, and the interrelationships between these characteristics. The experimental results of this work showcase both the direct usage of the 2nd Law analysis (both alongside and separate from the more traditional 1st Law heat release analysis) and the potential usage of this model for the exploration of engine operational modes. In particular, the 2nd Law analysis appears to be of immense importance to the exploration of low-temperature combustion regimes, as well as the usage of exhaust waste heat recovery systems.
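
The central quantity in such a Second Law analysis is the availability (exergy) destroyed by irreversibility, which is tied to entropy generation through the Gouy-Stodola relation (a general thermodynamic identity, stated here for context rather than as a model detail from the dissertation):

\[
\dot{A}_{\text{dest}} = T_{0}\,\dot{S}_{\text{gen}} \ge 0,
\]

where $T_{0}$ is the dead-state (ambient) temperature. The more entropy generated by combustion, heat transfer across finite temperature differences, and mixing, the less of the fuel's availability remains to be converted to work or recovered from the exhaust.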

Publication: Thermodynamic Consistency of the currently used Beam Mathematical Models and Thermodynamically Consistent New Formulations for Bending of Thermoelastic, and Thermoviscoelastic Beams (University of Kansas, 2019-12-31)
Mysore Krishna, Dhaval; Surana, Karan; TenPas, Peter; Sorem, Robert; Taghavi, Ray; Darabi, Masoud
In order to enhance currently used beam mathematical models in R^2 and R^3 to include mechanisms of dissipation and memory, it is necessary to establish whether the mathematical models for these theories can be derived using the conservation and balance laws of continuum mechanics in conjunction with the corresponding kinematic assumptions. This is referred to as thermodynamic consistency of the beam mathematical models. Thermodynamic consistency of the currently used beam models will permit use of the entropy inequality to establish constitutive theories in the presence of dissipation and memory mechanisms for the currently used beam mathematical models. This is the main motivation for the work presented in this dissertation. The currently used beam mathematical models for homogeneous, isotropic matter and reversible deformation physics are derived based on kinematic assumptions related to the axial and transverse displacement fields. These are then used to derive strain measures, followed by constitutive relations. For linear beam theories, strain measures are linear functions of displacement gradients and stresses are linear functions of strain measures. Using these stress and strain measures, an energy functional, consisting of kinetic energy, strain energy, and the potential energy of loads, is constructed over the volume of the beam. The Euler equation(s) extracted from the first variation of this energy functional, set to zero, yield the differential equations describing the evolution of the deforming beam. Alternatively, the principle of virtual work can also be used to derive mathematical models for beams. For linear elastic behavior with small deformation and small strain, these two approaches yield the same mathematical models. The energy methods or the principle of virtual work cannot be used for irreversible processes, thus precluding their use in the presence of dissipation and memory mechanisms. In this dissertation we examine whether the currently used beam mathematical models for reversible deformation physics, with the corresponding kinematic assumptions, (i) can be derived using the conservation and balance laws of classical continuum mechanics, or (ii) whether the conservation and balance laws of non-classical continuum mechanics are necessary in their derivation. In order to ensure that the mathematical models for various beam theories result in deformation that is in thermodynamic equilibrium, we must establish the consistency of the beam theories with regard to the conservation and balance laws of continuum mechanics, classical or non-classical, in conjunction with their corresponding kinematic assumptions. The currently used Euler-Bernoulli and Timoshenko beam mathematical models, which are representative of most beam mathematical models, are investigated. This is followed by details of general and higher-order thermodynamically consistent beam mathematical models that are free of kinematic assumptions and other approximations and remain valid for slender as well as deep beams. Model problem studies are presented for slender as well as deep beams. The new formulation presented here ensures thermodynamic equilibrium, as it is derived using the conservation and balance laws of continuum mechanics, and it remains valid for slender as well as non-slender beams. The new formulation presented for thermoelastic reversible mechanical deformation is extended to thermoviscoelastic beams with dissipation and to thermoviscoelastic beams with dissipation and memory. In each case, model problem studies are presented using currently used mathematical models (when possible), and the results are compared with those obtained using the new thermodynamically consistent formulation presented here.
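
As a concrete example of the classical route being examined, the Euler equation obtained from the first variation of the energy functional for a linear elastic Euler-Bernoulli beam under a transverse load $q(x)$ is the familiar fourth-order equation (static case shown for brevity):

\[
\frac{d^{2}}{dx^{2}}\!\left(EI\,\frac{d^{2}w}{dx^{2}}\right) = q(x),
\]

where $w(x)$ is the transverse displacement, $E$ the modulus of elasticity, and $I$ the second moment of area of the cross section. The question posed by the dissertation is whether equations of this type also follow from the conservation and balance laws together with the kinematic assumptions, so that dissipation and memory can then be added through the entropy inequality.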

Publication: Estimating the Effect of Connected and Autonomous Vehicles (CAVs) on Capacity and Level of Service at Freeway Merge Segments (University of Kansas, 2019-12-31)
Patel, Akshay Dinesh; Kondyli, Alexandra; Schrock, Steven D.; Mulinazzi, Thomas E.
The aim of this study was to obtain capacity adjustment factors and breakpoints that can be used with the Highway Capacity Manual (HCM6) methodology to obtain the level of service for freeways when connected and autonomous vehicles (CAVs) are present in the traffic stream. Accordingly, various two-lane heterogeneous flow scenarios were modelled, which included variations in free-flow speed and percentage of heavy vehicles, and the possible impact of CAVs on the current traffic system was analyzed. Each scenario was first calibrated in VISSIM to replicate the results from HCM6, and CAVs were later introduced in various proportions into the traffic stream of conventional vehicles to assess performance improvements using VISSIM. It was concluded that CAVs do improve system capacity and result in a longer free-flow phase, which is a direct effect of the increased road capacity. Up to a 25% CAV penetration rate, the road capacity increased gradually; beyond 25%, the growth rate was largely determined by the improved capability of the CAVs compared to conventional vehicles. An improved capability corresponded to a higher capacity growth rate and a higher capacity. Higher CAV penetration rates also resulted in longer free-flow phases, but only a few of the scenarios saw a minor improvement in density, which was due to the assumptions and driving behavior parameters utilized to model driving behavior for the different vehicle classes.
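
In the HCM6 framework referenced above, capacity adjustment factors are applied multiplicatively to the base segment capacity; schematically (a simplified illustration of how such a factor would be used, with CAF_CAV denoting the factor estimated for a given CAV penetration rate):

\[
c_{\text{adj}} = c_{\text{base}} \times CAF_{\text{CAV}},
\]

so, for example, a factor of 1.20 at a particular penetration rate would correspond to a 20% increase over the base capacity used in the level-of-service calculation.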

Publication: Power amplification via compliant layer interdigitation and dielectrophoretic structuring of PZT particle composites (University of Kansas, 2019-12-31)
Pessia, Zachary Robert; Friis, Elizabeth A; Fischer, Kenneth J; Barrett, Ronald M
Nonunion occurs in up to 10% of all fractures, with about 8% of all femoral fractures ending in nonunion, or failed healing with current fixation methods [1,2]. These failure rates can be caused by factors such as diabetes, osteoporosis, tobacco use, and severe tissue damage [3,4]. According to the FDA, it takes a minimum of nine months after trauma, with no progress in healing for three months, to declare a nonunion [5,6]. Some adjunct therapy methods are being used to combat these failure rates, such as the OsteoGen™ direct current bone growth stimulator. However, these devices require an implanted battery that will eventually need to be removed. The evolution of portable electronics has led to the recent popularity of piezoelectric materials for energy harvesting, especially for devices deployed remotely or in vivo. Intramedullary nails could utilize the energy harvesting capabilities of piezoelectric materials to provide electrical stimulation at the fracture site without an implanted battery. However, the efficiency of piezoelectric generators harvesting energy from the human body is lacking due to off-resonance loading [7]. In addition, piezoelectric ceramics are expensive to manufacture, dense, brittle, and difficult to use in high-strain environments. Piezoelectric composites composed of ferroelectric particles distributed in a polymer matrix are desirable due to their low cost and tunable properties. In this study, Compliant Layer Adaptive Composite Stacks (CLACS) made with thin piezoelectric composite layers structured by dielectrophoresis (DEP) were investigated to increase energy harvesting efficiency at low frequencies. To predict power generation capabilities, a theoretical model was developed using established particle composite models in conjunction with a shear lag structural mechanics model for CLACS. Granular composite discs of lead zirconate titanate particles in an epoxy matrix were manufactured at a 50% volume fraction and structured by DEP, where applicable. CLACS were manufactured using ten composite discs and two compliant layer thicknesses. The stacks were electromechanically tested by varying load, frequency, and resistance. Experimental results showed an increase in power amplification with DEP-structured discs and compliant layers. In addition, the theoretical model accurately predicts power production for both 0-3 and 1-3 CLACS at low frequencies. DEP-structured particle composite CLACS can provide a method of energy harvesting for devices in remote locations, especially in low-frequency, high-strain environments. Future work could continue the development of piezoelectric particle composite CLACS for use in intramedullary nails. Such studies would evaluate the performance of ring-shaped piezoelectric composites, develop theoretical understanding for ring-shaped CLACS, investigate the fatigue strength of piezoelectric particle composites, and evaluate the impact strength of particle composite CLACS as compared to ceramic CLACS. Lastly, overall improvements to particle composite manufacturing methods to reduce variability could be investigated.
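
For reference, the electrical output in resistive-load tests of this kind is typically quantified as the average power dissipated in the load (a standard measurement relation, not a model result from this thesis):

\[
P_{\text{avg}} = \frac{V_{\text{rms}}^{2}}{R_{\text{load}}},
\]

with the load resistance swept at each loading frequency to locate the value that maximizes $P_{\text{avg}}$ for a given stack configuration.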