907 results for Strut-and-Tie Model
Abstract:
In this research, micro- and nanoparticles of dead Spirulina platensis biomass were obtained, characterized and employed to remove the synthetic dyes FD&C Red No. 40 and Acid Blue 9 from aqueous solutions. The effects of particle size (micro and nano) and biosorbent dosage (from 50 to 750 mg) were studied. Pseudo-first-order, pseudo-second-order and Elovich models were used to evaluate the biosorption kinetics. The nature of the biosorption was verified using energy dispersive X-ray spectroscopy (EDS). The best results for both dyes were found using 250 mg of nanoparticles; under these conditions, the biosorption capacities were 295 mg g−1 and 1450 mg g−1, and the percentages of dye removal were 15.0% and 72.5% for FD&C Red No. 40 and Acid Blue 9, respectively. The pseudo-first-order model was the most adequate to represent the biosorption of both dyes onto microparticles, and the Elovich model was more appropriate for biosorption onto nanoparticles. The EDS results suggested that dye biosorption onto microparticles occurred mainly by physical interactions, whereas for the nanoparticles chemisorption was dominant.
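For readers who want to reproduce this kind of kinetic analysis, the three models named above have standard closed forms; the sketch below is a generic illustration of fitting them with SciPy (the uptake data, initial guesses and units are assumed for illustration, not taken from this study).

```python
# Minimal sketch: fit pseudo-first-order, pseudo-second-order and Elovich
# kinetic models to uptake data q(t). Data and initial guesses are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def pfo(t, qe, k1):      # pseudo-first order:  q = qe * (1 - exp(-k1 t))
    return qe * (1.0 - np.exp(-k1 * t))

def pso(t, qe, k2):      # pseudo-second order: q = qe^2 k2 t / (1 + qe k2 t)
    return (qe**2 * k2 * t) / (1.0 + qe * k2 * t)

def elovich(t, a, b):    # Elovich:             q = (1/b) ln(1 + a b t)
    return (1.0 / b) * np.log(1.0 + a * b * t)

t = np.array([5, 10, 20, 40, 60, 90, 120], dtype=float)        # min (hypothetical)
q = np.array([60, 110, 180, 240, 270, 288, 295], dtype=float)  # mg g-1 (hypothetical)

for name, model, p0 in [("pseudo-first order", pfo, (300.0, 0.05)),
                        ("pseudo-second order", pso, (300.0, 1e-4)),
                        ("Elovich", elovich, (50.0, 0.02))]:
    popt, _ = curve_fit(model, t, q, p0=p0, maxfev=10000)
    sse = float(np.sum((q - model(t, *popt))**2))
    print(f"{name}: parameters = {popt}, SSE = {sse:.1f}")
```

Comparing the residuals (or a corrected coefficient of determination) across the three fits is the usual basis for statements such as "the pseudo-first-order model was the most adequate".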
Abstract:
Most major cities in the eastern United States have air quality deemed unhealthy by the EPA under a set of regulations known as the National Ambient Air Quality Standards (NAAQS). The worst air quality in Maryland is measured in Edgewood, MD, a small community located along the Chesapeake Bay and generally downwind of Baltimore on hot summertime days. Direct measurements and numerical simulations were used to investigate how meteorology and chemistry conspire to create adverse levels of photochemical smog, especially at this coastal location. Ozone (O3) and oxidized reactive nitrogen (NOy), a family of ozone precursors, were measured over the Chesapeake Bay during a ten-day experiment in July 2011 to better understand the formation of ozone over the Bay and its impact on coastal communities such as Edgewood. Ozone over the Bay during the afternoon was 10% to 20% higher than at the closest upwind ground sites. A combination of complex boundary layer dynamics, deposition rates, and unaccounted-for marine emissions plays an integral role in the regional maximum of ozone over the Bay. The CAMx regional air quality model was assessed and enhanced through comparison with data from NASA’s 2011 DISCOVER-AQ field campaign. Comparisons show a model overestimate of NOy by 86.2% and a model underestimate of formaldehyde (HCHO) by 28.3%. I present a revised model framework that better captures these observations and the response of ozone to reductions of precursor emissions. Incremental controls on electricity-generating stations will produce greater benefits for surface ozone, while additional controls on mobile sources may yield less benefit because cars emit less pollution than expected. Model results also indicate that as ozone concentrations improve with decreasing anthropogenic emissions, the photochemical lifetime of tropospheric ozone increases. The lifetime of ozone lengthens because the two primary gas-phase sinks for odd oxygen (Ox ≈ NO2 + O3), attack by hydroperoxyl radicals (HO2) on ozone and the formation of nitrate, weaken with decreasing pollutant emissions. This unintended consequence of air quality regulation causes pollutants to persist longer in the atmosphere, and indicates that pollutant transport between states and countries will likely play a greater role in the future.
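Schematically, the two gas-phase odd-oxygen sinks named above, and the lifetime they control, can be written as follows (a textbook-style summary in LaTeX notation, not taken from the dissertation; nitrate formation is represented by its main daytime channel):

\[
\mathrm{HO_2 + O_3 \rightarrow OH + 2\,O_2}, \qquad \mathrm{NO_2 + OH + M \rightarrow HNO_3 + M},
\]
\[
\tau_{\mathrm{O_x}} \approx \frac{[\mathrm{O_x}]}{k_{1}[\mathrm{HO_2}][\mathrm{O_3}] + k_{2}[\mathrm{OH}][\mathrm{NO_2}]},
\]

so as precursor emissions fall, both loss terms shrink and the Ox lifetime lengthens, consistent with the result described above.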
Abstract:
Mesoscale gravity waves (MGWs) are large pressure perturbations that form in the presence of a stable layer at the surface, either behind mesoscale convective systems (MCSs) in summer or over warm frontal surfaces behind elevated convection in winter. MGWs are associated with damaging winds, moderate to heavy precipitation, and occasional heat bursts at the surface. The forcing mechanism for MGWs in this study is hypothesized to be evaporative cooling occurring behind a convective line. This evaporatively cooled air generates a downdraft that depresses the surface-based stable layer and causes pressure decreases, strong wind speeds, and MGW genesis. Using the Weather Research and Forecasting (WRF) model version 3.0, evaporative cooling is simulated using an imposed cold thermal. Sensitivity studies examine the response of MGW structure to different thermal and shear profiles in which the strength and depth of the inversion are varied, as well as the amount of wind shear. MGWs are characterized in terms of response variables such as wind speed perturbations (U'), temperature perturbations (T'), pressure perturbations (P'), potential temperature perturbations (Θ'), and the correlation coefficient (R) between U' and P'. Regime diagrams portray the response of MGWs to these variables in order to better understand the formation, causes, and intensity of MGWs. The results of this study indicate that shallow, weak surface layers coupled with deep, neutral layers above favor the formation of waves of elevation. Conversely, deep, strong surface layers coupled with deep, neutral layers above favor the formation of waves of depression. This is also the type of atmospheric setup that tends to produce substantial heating at the surface.
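The use of R between U' and P' as a wave diagnostic follows from the impedance relation of linear gravity wave theory (standard background, not spelled out in the abstract):

\[
p' \approx \bar{\rho}\, c_i\, u',
\]

where \(\bar{\rho}\) is the mean density, \(c_i\) the intrinsic phase speed, and \(u'\) the wind perturbation along the direction of propagation, so a correlation coefficient near +1 between U' and P' is characteristic of a coherent ducted mesoscale gravity wave.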
Abstract:
As the formative agents of cloud droplets, aerosols play an undeniably important role in the development of clouds and precipitation. Few meteorological models have been developed or adapted to simulate aerosols and their contribution to cloud and precipitation processes. The Weather Research and Forecasting (WRF) model has recently been coupled with an atmospheric chemistry suite, jointly referred to as WRF-Chem, allowing atmospheric chemistry and meteorology to influence each other’s evolution within a mesoscale modeling framework. Provided that the model physics are robust, this framework allows the feedbacks between aerosol chemistry, cloud physics, and dynamics to be investigated. This study focuses on the effects of aerosols on meteorology, specifically the interaction of aerosol chemical species with microphysical processes represented within the framework of WRF-Chem. Aerosols are represented by eight size bins using the Model for Simulating Aerosol Interactions and Chemistry (MOSAIC) sectional parameterization, which is linked to the Purdue Lin bulk microphysics scheme. The aim of this study is to examine the sensitivity of deep convective precipitation modeled by the 2D WRF-Chem to varying aerosol number concentration and aerosol type. A systematic study has been performed of the effects of aerosols on parameters such as total precipitation, updraft/downdraft speed, distribution of hydrometeor species, and organizational features, within idealized maritime and continental thermodynamic environments. Initial results were obtained using WRFv3.0.1, and a second series of tests was run using WRFv3.2 after several changes to the activation, autoconversion, and Lin et al. microphysics schemes added by the WRF community, as well as the implementation of prescribed vertical levels by the author. The results of the WRFv3.2 runs contrasted starkly with the WRFv3.0.1 runs. The WRFv3.0.1 runs produced a propagating system resembling a developing squall line, whereas the WRFv3.2 runs did not. The response of total precipitation, updraft/downdraft speeds, and system organization to increasing aerosol concentrations was opposite between runs with different versions of WRF. Results of the WRFv3.2 runs, however, were in better agreement, in the timing and magnitude of vertical velocity and hydrometeor content, with a WRFv3.0.1 run using single-moment Lin et al. microphysics than with WRFv3.0.1 runs with chemistry. One result consistent throughout all simulations was an inhibition of warm-rain processes due to enhanced aerosol concentrations, which resulted in a delay of precipitation onset ranging from 2-3 minutes in WRFv3.2 runs to up to 15 minutes in WRFv3.0.1 runs. This result was not observed in a previous study by Ntelekos et al. (2009) using WRF-Chem, perhaps due to their use of coarser horizontal and vertical resolution in their experiment. The changes to microphysical processes such as activation and autoconversion from WRFv3.0.1 to WRFv3.2, along with changes in the packing of vertical levels, had more impact than the varying aerosol concentrations, even though the range of aerosol concentrations tested was greater than that observed in field studies. In order to take full advantage of the aerosol input now offered by the chemistry module in WRF, the author recommends that a fully double-moment microphysics scheme be linked, rather than the limited double-moment Lin et al. scheme that currently exists. With this modification, WRF-Chem will be a powerful tool for studying aerosol-cloud interactions and will allow comparison of results with other studies using more modern and complex microphysical parameterizations.
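The warm-rain inhibition noted above is easiest to see in a bulk autoconversion rate that depends on droplet number. The sketch below uses the Khairoutdinov and Kogan (2000) parameterization purely as an illustration; it is not the Lin et al. scheme used in this study, and the values are hypothetical.

```python
# Illustration only: Khairoutdinov-Kogan (2000) bulk autoconversion rate,
# showing how a larger cloud droplet number (more activated aerosol)
# suppresses the conversion of cloud water to rain.
def autoconversion_kk2000(qc, nc):
    """qc: cloud water mixing ratio [kg/kg]; nc: droplet number [cm^-3].
    Returns the rain-water production rate [kg/kg/s]."""
    return 1350.0 * qc**2.47 * nc**-1.79

qc = 1.0e-3                          # 1 g/kg of cloud water (hypothetical)
for nc in (50.0, 300.0, 1000.0):     # maritime -> continental -> polluted
    print(f"Nc = {nc:6.0f} cm^-3  ->  dqr/dt = {autoconversion_kk2000(qc, nc):.2e} kg/kg/s")
```

Raising the droplet number from maritime to heavily polluted values reduces this autoconversion rate by more than an order of magnitude, the same qualitative effect that delays precipitation onset in the simulations described above.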
Abstract:
Age is the highest risk factor for some of the most prevalent human diseases, including cancer. Telomere shortening is thought to play a central role in the aging process in humans. The link between telomeres and aging is highlighted by the fact that genetic diseases causing telomerase deficiency are associated with premature aging and increased risk of cancer. For the last two decades, this link has been mostly investigated using mice that have long telomeres. However, zebrafish has recently emerged as a powerful and complementary model system to study telomere biology. Zebrafish possess human-like short telomeres that progressively decline with age, reaching lengths in old age that are observed when telomerase is mutated. The extensive characterization of its well-conserved molecular and cellular physiology makes this vertebrate an excellent model to unravel the underlying relationship between telomere shortening, tissue regeneration, aging and disease. In this Review, we explore the advantages of using zebrafish in telomere research and discuss the primary discoveries made in this model that have contributed to expanding our knowledge of how telomere attrition contributes to cellular senescence, organ dysfunction and disease.
Abstract:
Applications are subject to a continuous evolution process with a profound impact on their underlying data model, hence requiring frequent updates to the applications' class structure and to the database structure as well. This twofold problem, schema evolution and instance adaptation, usually known as database evolution, is addressed in this thesis. Additionally, we address concurrency and error recovery problems with a novel meta-model and its aspect-oriented implementation. Modern object-oriented databases provide features that help programmers deal with object persistence, as well as related problems such as database evolution, concurrency and error handling. In most systems there are transparent mechanisms to address these problems; nonetheless, the database evolution problem still requires some human intervention, which consumes much of programmers' and database administrators' work effort. Earlier research has demonstrated that aspect-oriented programming (AOP) techniques enable the development of flexible and pluggable systems. In these earlier works, the schema evolution and instance adaptation problems were addressed as database management concerns. However, none of this research focused on orthogonal persistent systems. We argue that AOP techniques are well suited to address these problems in orthogonal persistent systems. Regarding concurrency and error recovery, earlier research showed that only syntactic obliviousness between the base program and aspects is possible. Our meta-model and framework follow an aspect-oriented approach focused on the object-oriented orthogonal persistent context. The proposed meta-model is characterized by its simplicity in order to achieve efficient and transparent database evolution mechanisms. Our meta-model supports multiple versions of a class structure by applying a class-versioning strategy, thus enabling bidirectional application compatibility among versions of each class structure. That is to say, the database structure can be updated while earlier applications continue to work, as do later applications that know only the updated class structure. The specific characteristics of orthogonal persistent systems, as well as a metadata enrichment strategy within the application's source code, complete the inception of the meta-model and have motivated our research work. To test the feasibility of the approach, a prototype was developed. Our prototype is a framework that mediates the interaction between applications and the database, providing them with orthogonal persistence mechanisms. These mechanisms are introduced into applications as an aspect, in the aspect-oriented sense. Objects do not require extending any superclass or implementing an interface, nor do they need to carry a particular annotation. Classes with parametric types are also handled correctly by our framework. However, classes that belong to the programming environment must not be handled as versionable, due to restrictions imposed by the Java Virtual Machine. Regarding concurrency support, the framework provides applications with a multithreaded environment that supports database transactions and error recovery. The framework keeps applications oblivious to the database evolution problem, as well as to persistence. Programmers can update the applications' class structure, and the framework will produce a new version of it at the database metadata layer. Using our XML-based pointcut/advice constructs, the framework's instance adaptation mechanism is extended, keeping the framework oblivious to this problem as well. The potential development gains provided by the prototype were benchmarked. In our case study, the results confirm that the mechanisms' transparency has positive repercussions on programmer productivity, simplifying the entire evolution process at the application and database levels. The meta-model itself was also benchmarked in terms of complexity and agility. Compared with other meta-models, it requires fewer meta-object modifications in each schema evolution step. Other types of tests were carried out in order to validate prototype and meta-model robustness. To perform these tests, we used a small OO7 database, chosen for its data model complexity. Since the developed prototype offers some features that were not observed in other known systems, performance benchmarks against them were not possible. However, the developed benchmark is now available for future performance comparisons with equivalent systems. In order to test our approach in a real-world scenario, we developed a proof-of-concept application. This application was developed without any persistence mechanisms; using our framework and minor changes to the application's source code, we added these mechanisms. Furthermore, we tested the application in a schema evolution scenario. This real-world experience with our framework showed that applications remain oblivious to persistence and database evolution. In this case study, our framework proved to be a useful tool for programmers and database administrators. Performance issues and the concurrency model being limited to a single Java Virtual Machine are the major limitations found in the framework.
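As a language-neutral caricature of the class-versioning idea described above (illustrative Python, not the thesis's Java/AspectJ framework; the class name, fields and adapters are invented), each stored object records the schema version it was written under, and small per-version adapters convert its state when an application bound to a different class version loads it:

```python
# Caricature of class versioning and instance adaptation (illustrative only).
# Adapters map an object's stored state between adjacent schema versions,
# giving bidirectional compatibility: old applications read new data and
# new applications read old data.
ADAPTERS = {
    ("Customer", 1, 2): lambda s: {"name": s["name"], "email": s["mail"]},  # v1 'mail' renamed to 'email' in v2
    ("Customer", 2, 1): lambda s: {"name": s["name"], "mail": s["email"]},
}

def adapt(class_name, state, stored_version, wanted_version):
    """Walk version by version so each adapter stays a small, local mapping."""
    step = 1 if wanted_version > stored_version else -1
    v = stored_version
    while v != wanted_version:
        state = ADAPTERS[(class_name, v, v + step)](state)
        v += step
    return state

# An application built against Customer v2 loading an object stored as v1:
old_state = {"name": "Ada", "mail": "ada@example.org"}   # hypothetical v1 state
print(adapt("Customer", old_state, stored_version=1, wanted_version=2))
```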
Abstract:
I investigate the effects of information frictions on price-setting decisions. I show that firms' output prices and wages are less sensitive to aggregate economic conditions when firms and workers cannot perfectly understand (or know) the aggregate state of the economy. Prices and wages respond with a lag to aggregate innovations because agents learn slowly about those changes, and this delayed adjustment in prices makes output and unemployment more sensitive to aggregate shocks. In the first chapter of this dissertation, I show that workers' noisy information about the state of the economy helps explain why real wages are sluggish. In the context of a search-and-matching model, wages do not immediately respond to a positive aggregate shock because workers do not (yet) have enough information to demand higher wages. This increases firms' incentives to post more vacancies, and it makes unemployment volatile and sensitive to aggregate shocks. This mechanism is robust to two major criticisms of existing theories of sluggish wages and volatile unemployment: the flexibility of wages for new hires and the cyclicality of the opportunity cost of employment. Calibrated to U.S. data, the model explains 60% of overall unemployment volatility. Consistent with empirical evidence, the response of unemployment to TFP shocks predicted by my model is large and hump-shaped, peaking one year after the TFP shock, while the response of the aggregate wage is weak and delayed, peaking after two years. In the second chapter of this dissertation, I study the role of information frictions and inventories in firms' price-setting decisions in the context of a monetary model. In this model, intermediate-goods firms accumulate output inventories, observe aggregate variables with a one-period lag, and observe their nominal input prices and demand at all times. Firms face idiosyncratic shocks and cannot perfectly infer the state of nature. After a contractionary nominal shock, nominal input prices go down, and firms accumulate inventories because they perceive some positive probability that the nominal price decline is due to a good productivity shock. This prevents firms' prices from decreasing and makes current profits, households' income, and aggregate demand go down. According to my model simulations, a 1% decrease in the money growth rate causes output to decline 0.17% in the first quarter and 0.38% in the second, followed by a slow recovery to the steady state. Contractionary nominal shocks also have significant effects on total investment, which remains 1% below the steady state for the first six quarters.
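The slow-learning mechanism described above can be summarized by a standard noisy-information (signal-extraction) updating rule; the notation below is generic rather than the dissertation's own:

\[
s_t = x_t + \varepsilon_t, \quad \varepsilon_t \sim N(0,\sigma_\varepsilon^2), \qquad
\hat{x}_{t\mid t} = (1-\kappa)\,\hat{x}_{t\mid t-1} + \kappa\, s_t, \quad 0<\kappa<1,
\]

where the gain \(\kappa\) falls as the signal noise \(\sigma_\varepsilon^2\) rises, so beliefs about the aggregate state, and hence wages and prices, adjust to an innovation only gradually over several periods.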
Abstract:
The use of the ‘commission-accession’ principle as a mechanism for sustainable collecting in public museums and galleries has been significantly under-researched, only recently attracting attention from national funding bodies in the United Kingdom (UK). This research has assessed an unfolding situation and provided a body of current evaluative evidence for commission-based acquisitions, as well as a model for curators to use in future contemporary art purchases. ‘Commission-accession’ is a practice increasingly used by European and American museums, yet it has seen little uptake in the UK. Very recent examples demonstrate that new works produced via commissioning, which then enter permanent collections, have significant financial and audience benefits that UK museums could harness by drawing on the expertise of local and national commissioning organisations. Very little evaluative information is available on inter-institutional precedents in the United States (US) or on ‘achat par commande’ in France. Nor is there yet literature that investigates the ambition for, and viability of, such models in the UK. This thesis addresses both of these areas and provides evaluative case studies that will be of particular value to curators who seek sustainable ways to build their contemporary art collections. It draws on a survey of 82 museums and galleries across the UK conducted for this research, which provides a picture of where and how ‘commission-accession’ has been applied and demonstrates its impacts as a strategy. In addition, interviews with artists and curators in the UK, US and France on the social, economic and cultural implications of ‘commission-accession’ processes were undertaken. These have shed new light on issues inherent to the commissioning of contemporary art, such as communication, trust, and risk, as well as drawing attention to the benefits and challenges involved in commissioning as-yet-unmade works of art.
Abstract:
This study investigated the role of fatalism as a cultural value orientation, and of causal attributions for past failure, in the academic performance of high school students in the Araucania Region of Chile. A total of 3,348 Mapuche and non-Mapuche students participated in the study. Consistent with the Culture and Behavior model that guided the research, tests of causal models based on structural equation analysis show that academic performance is in part a function of variations in the level of fatalism, directly as well as indirectly through its influence on attribution processes and failure-related emotions. In general, the model representing the proposed structure of relations among fatalism, attributions, and emotions as determinants of academic performance fit the data for both Mapuche and non-Mapuche students. However, results show that some of the relations in the model differ between students from these two ethnic groups. Finally, according to the results of the causal-model analyses, family SES appears to be the most important determinant of fatalism.
Abstract:
A method for systematically tracking swells across ocean basins is developed by taking advantage of high-quality data from space-borne altimeters and wave model output. The evolution of swells is observed over large distances based on 202 swell events with periods ranging from 12 to 18 s. An empirical attenuation rate of swell energy of about 4 × 10−7 m−1 is estimated from these observations, and the nonbreaking energy dissipation rates of swells far from their generating areas are also estimated using a point source model. The resulting acceptable range of nonbreaking dissipation rates is −2.5 to 5.0 × 10−7 m−1, which corresponds to dissipation e-folding scales from at least 2000 km for steep swells to almost infinite for small-amplitude swells. These rates are consistent with previous studies using in-situ and synthetic aperture radar (SAR) observations. The frequency dispersion and angular spreading effects during swell propagation are discussed by comparing the results with other studies, demonstrating that these are the two dominant processes for swell height attenuation, especially in the near field. The dissipation rates obtained from these observations can be used as a reference for ocean engineering and wave modeling, and for related studies such as air-sea and wind-wave-turbulence interactions.
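The e-folding scale quoted above follows directly from the spatial decay rate; as a quick check using the upper end of the reported range,

\[
E(x) = E_0\, e^{-\mu x} \;\Longrightarrow\; L_e = \frac{1}{\mu} = \frac{1}{5.0\times10^{-7}\ \mathrm{m^{-1}}} = 2\times10^{6}\ \mathrm{m} = 2000\ \mathrm{km},
\]

while the smaller (and formally negative) rates at the lower end of the range correspond to e-folding scales that grow toward infinity for small-amplitude swells.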
Abstract:
Liquid-solid interactions become important as dimensions approach the micro/nano-scale. This dissertation focuses on liquid-solid interactions in two distinct applications: capillary-driven self-assembly of thin foils into 3D structures, and droplet wetting of hydrophobic micropatterned surfaces. The phenomenon of self-assembly of complex structures is common in biological systems; examples include the self-assembly of proteins into macromolecular structures and the self-assembly of lipid bilayer membranes. The principles governing this phenomenon have been applied to induce self-assembly of millimeter-scale Si thin films into spherical and other 3D structures, which are then integrated into light-trapping photovoltaic (PV) devices. Motivated by this application, we present a generalized analytical study of the self-folding of thin plates into deterministic 3D shapes, through fluid-solid interactions, to be used as PV devices. This study consists of developing a model using beam theory that incorporates the two competing components: a capillary force that promotes folding and the bending rigidity of the foil that resists folding into a 3D structure. Through an equivalence argument for thin foils of different geometry, an effective folding parameter, which uniquely characterizes the driving force for folding, has been identified. A criterion for spontaneous folding of an arbitrarily shaped 2D foil, based on the effective folding parameter, is thus established. Measurements from experiments using different materials and predictions from the model match well, validating the assumptions used in the analysis. As an alternative to the mechanics-model approach, minimization of the total free energy is employed to investigate the interactions between a fluid droplet and a flexible thin film. A 2D energy functional is proposed, comprising the surface energy of the fluid, the bending energy of the thin film, and the gravitational energy of the fluid. Through simulations with Surface Evolver, the shapes of the droplet and the thin film at equilibrium are obtained. A critical thin-film length necessary for complete enclosure of the fluid droplet, and hence successful self-assembly into a PV device, is determined and compared with the experimental results and mechanics-model predictions. The results from the modeling and energy approaches and the experiments are all consistent. Superhydrophobic surfaces, which have unique properties including self-cleaning and water repellency, are desired in many applications; one excellent example in nature is the lotus leaf. To fabricate these surfaces, well-designed micro/nano-surface structures are often employed. In this research, we fabricate superhydrophobic micropatterned polydimethylsiloxane (PDMS) surfaces composed of micropillars of various sizes and arrangements by means of soft lithography. Both anisotropic surfaces, consisting of parallel grooves and cylindrical pillars in rectangular lattices, and isotropic surfaces, consisting of cylindrical pillars in square and hexagonal lattices, are considered. A novel technique is proposed to image the contact line (CL) of the droplet on the hydrophobic surface. This technique provides a new approach to distinguish between partial and complete wetting. The contact area between the droplet and the microtextured surface is then measured for a droplet in the Cassie state, which is a state of partial wetting. The results show that although the droplet is in the Cassie state, the contact area does not necessarily follow the Cassie model predictions. Moreover, the CL is not circular and is affected by the micropatterns, in both the isotropic and anisotropic cases. Thus, it is suggested that along with the contact angle, the parameter typically reported in the literature to quantify wetting, the size and shape of the contact area should also be presented. This technique is employed to investigate the evolution of the CL on a hydrophobic micropatterned surface in three cases: a single droplet impacting the micropatterned surface, two droplets coalescing on micropillars, and a receding droplet resting on the micropatterned surface. Another parameter that quantifies hydrophobicity is the contact angle hysteresis (CAH), which indicates the resistance of the surface to the sliding of a droplet of a given volume. The conventional methods of using advancing and receding angles, or a tilting stage, to measure the resistance of the micropatterned surface are indirect, not to mention inaccurate due to the discrete and stepwise motion of the CL on micropillars. A micronewton force sensor is utilized to directly measure the resisting force by dragging a droplet on a microtextured surface. Together with the proposed imaging technique, the evolution of the CL during sliding is also explored. It is found that, at the onset of sliding, the CL behaves as a linear elastic solid with a constant stiffness. Afterwards, the force first increases, then decreases and reaches a steady state, accompanied by periodic oscillations due to regular pinning and depinning of the CL. Both the maximum and steady-state forces depend primarily on the area fractions of the micropatterned surfaces in our experiments. The resisting force is found to be proportional to the number of pillars that pin the CL at the trailing edge, validating the assumption that the resistance mainly arises from CL pinning at the trailing edge. In each pinning-and-depinning cycle during the steady state, the CL also shows linear elastic behavior, but with a lower stiffness. The force variation and energy dissipation involved can also be determined. This novel method of measuring the resistance of the micropatterned surface elucidates its dependence on CL pinning and provides more insight into the mechanisms of CAH.
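For reference, the Cassie (Cassie-Baxter) prediction referred to above, in its usual form for a droplet resting on flat-topped pillars with air trapped underneath, is

\[
\cos\theta_{CB} = f\,(\cos\theta_Y + 1) - 1,
\]

where \(f\) is the area fraction of pillar tops wetted by the liquid and \(\theta_Y\) is the intrinsic (Young) contact angle on a smooth surface; the imaging results above show that the actual contact area and contact-line shape can deviate from this idealized picture.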
Abstract:
To what extent has citizenship been transformed under the New Labour government to include women as equal citizens? This chapter will examine New Labour’s record in terms of alternative conceptions of citizenship: a model based on equal obligations to paid work, a model based on recognising care and gender difference, and a model of universal citizenship, underpinning equal expectations of care work and paid work with rights to the resources needed for individuals to combine both. It will argue that, while New Labour has signed up to the EU resolution on work-life balance, which includes a commitment to a ‘new social contract on gender’, and has significantly increased resources for care, obligations to work are at the heart of New Labour’s ideas of citizenship, with work conceived as paid employment: policies in practice have done more to bring women into employment than men into care. Women’s citizenship is still undermined, though less than under earlier governments, by these unequal obligations and their consequences for social rights.
Abstract:
Gap junction coupling is ubiquitous in the brain, particularly between the dendritic trees of inhibitory interneurons. Such direct non-synaptic interaction allows for direct electrical communication between cells. Unlike spike-time-driven synaptic neural network models, which are event based, any model with gap junctions must necessarily involve a single-neuron model that can represent the shape of an action potential. Indeed, not only do neurons communicating via gap junctions feel super-threshold spikes, but they also experience, and respond to, sub-threshold voltage signals. In this chapter we show that the so-called absolute integrate-and-fire model is ideally suited to such studies. At the single-neuron level, voltage traces for the model may be obtained in closed form and are shown to mimic those of fast-spiking inhibitory neurons. Interestingly, in the presence of a slow spike-adaptation current the model is shown to support periodic bursting oscillations. For both tonic and bursting modes the phase response curve can be calculated in closed form. At the network level we focus on global gap junction coupling and show how to analyze the asynchronous firing state in large networks. Importantly, we are able to determine the emergence of non-trivial network rhythms due to strong-coupling instabilities. To illustrate the use of our theoretical techniques (particularly the phase-density formalism used to determine stability), we focus on a spike-adaptation-induced transition from asynchronous tonic activity to synchronous bursting in a gap-junction-coupled network.
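A minimal sketch of the kind of network considered here, assuming the piecewise-linear "absolute" form dv/dt = |v| + I with a hard threshold and reset (parameter values are illustrative, not the chapter's), with two cells coupled by a linear gap-junction current:

```python
# Two absolute integrate-and-fire cells coupled by a gap junction (sketch only).
# Model form assumed: dv/dt = |v| + I + g*(v_other - v_self), with threshold/reset.
import numpy as np

def simulate(T=20.0, dt=1e-4, I=(0.10, 0.12), g=0.05, v_th=1.0, v_reset=-1.0):
    n = int(T / dt)
    v = np.array([0.0, 0.3])             # arbitrary initial conditions
    spikes = ([], [])
    for k in range(n):
        gap = g * (v[::-1] - v)           # gap-junction current felt by each cell
        v = v + dt * (np.abs(v) + np.array(I) + gap)
        for i in range(2):
            if v[i] >= v_th:
                spikes[i].append(k * dt)
                v[i] = v_reset
    return spikes

spikes = simulate()
print("spike counts:", [len(s) for s in spikes])
print("first spike times:", [round(s[0], 3) for s in spikes])
```

Because the gap current depends on the full voltage difference, each cell feels both its neighbour's sub-threshold drift and the sharp excursion around firing, which is why an event-based, spike-time-only description is not sufficient here.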
Abstract:
Planar cell polarity (PCP) occurs in the epithelia of many animals and can lead to the alignment of hairs, bristles and feathers; physiologically, it can organise ciliary beating. Here we present two approaches to modelling this phenomenon. The aim is to discover the basic mechanisms that drive PCP while keeping the models mathematically tractable. We present a feedback-and-diffusion model, in which adjacent cell sides of neighbouring cells are coupled by a negative feedback loop and diffusion acts within the cell. This approach can give rise to polarity, but also to period-two patterns. Polarisation arises via an instability provided the feedback is sufficiently strong and the diffusion sufficiently weak. Moreover, we discuss a conservative model in which proteins within a cell are redistributed depending on the amount of protein in the neighbouring cells, coupled with intracellular diffusion. In this case polarity can arise from weakly polarised initial conditions or via a wave, provided the diffusion is weak enough. Both models can overcome small anomalies in the initial conditions. Furthermore, the range of the effects of groups of cells with properties different from those of the surrounding cells depends on the strength of the initial global cue and the intracellular diffusion.
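A caricature of the feedback-and-diffusion class of models on a ring of cells (the kinetics and parameters below are invented for illustration and are not the paper's equations): each cell carries protein levels on its left (L) and right (R) faces, the right face of cell i is inhibited by the left face of cell i+1 (and symmetrically), and intracellular diffusion D mixes the two faces of the same cell.

```python
# Toy feedback-and-diffusion PCP model on a ring of N cells (illustration only).
# With strong feedback (steep Hill exponent h) and weak intracellular diffusion D,
# the unpolarised state L = R loses stability and cells polarise; depending on the
# initial cue, either aligned polarity or a period-two pattern can emerge.
import numpy as np

def step(L, R, D=0.05, h=6, dt=0.01):
    L_next = np.roll(L, -1)   # left face of cell i+1, facing the right face of cell i
    R_prev = np.roll(R, 1)    # right face of cell i-1, facing the left face of cell i
    dR = 1.0 / (1.0 + L_next**h) - R + D * (L - R)
    dL = 1.0 / (1.0 + R_prev**h) - L + D * (R - L)
    return L + dt * dL, R + dt * dR

rng = np.random.default_rng(1)
N = 40
L = 0.75 + 0.01 * rng.standard_normal(N)   # near the unpolarised steady state
R = 0.75 + 0.01 * rng.standard_normal(N)
for _ in range(100000):                     # integrate to t = 1000
    L, R = step(L, R)

polarity = R - L
print("mean |R - L| per cell:", round(float(np.mean(np.abs(polarity))), 3))
print("fraction of neighbours with aligned polarity:",
      round(float(np.mean(np.sign(polarity) == np.sign(np.roll(polarity, -1)))), 3))
```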
Abstract:
The fruit is one of the most complex and important structures produced by flowering plants, and understanding the development and maturation of fruits in different angiosperm species with diverse fruit structures is of immense interest. In the work presented here, molecular genetics and genomic analysis are used to explore the processes that form the fruit in two species: the model organism Arabidopsis and the diploid strawberry Fragaria vesca. One important basic question concerns the molecular genetic basis of fruit patterning. A long-standing model of Arabidopsis fruit (gynoecium) patterning holds that auxin produced at the apex diffuses downward, forming a gradient that provides apical-basal positional information to specify different tissue types along the gynoecium’s length. The proposed gradient, however, has never been observed, and the model appears inconsistent with a number of observations. I present a new, alternative model, wherein auxin acts to establish the adaxial-abaxial domains of the carpel primordia, which then ensures proper development of the final gynoecium. A second project utilizes genomics to identify genes that regulate fruit color by analyzing the genome sequences of Fragaria vesca, a species of wild strawberry. Shared and distinct SNPs among three F. vesca accessions were identified, providing a foundation for locating candidate mutations underlying phenotypic variation among different F. vesca accessions. Through systematic analysis of relevant SNP variants, a candidate SNP in FveMYB10 was identified that may underlie the fruit color of the yellow-fruited accessions, and this was subsequently confirmed by functional assays. Our lab has previously generated extensive RNA-sequencing data that depict genome-scale gene expression profiles in F. vesca fruit and flower tissues at different developmental stages. To enhance the accessibility of this dataset, the web-based eFP software was adapted for it, allowing visualization of gene expression in any tissue through user-initiated queries. Together, this thesis work proposes a well-supported new model of fruit patterning in Arabidopsis and provides further resources for F. vesca, including genome-wide variant lists and the ability to visualize gene expression. This work will facilitate future work linking traits of economic importance to specific genes and gaining novel insights into fruit patterning and development.
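A small sketch of the shared-versus-private SNP comparison described above (the file layout and accession names here are hypothetical and stand in for whatever the actual pipeline produced):

```python
# Illustration: classify SNPs as shared by all three accessions or private to one,
# e.g. to shortlist candidates for the yellow-fruited phenotype such as the
# FveMYB10 variant discussed above. File format and accession names are hypothetical.
import csv

def load_snps(path):
    """Expect a CSV with columns: chrom, pos, ref, alt; return a set of variant keys."""
    with open(path, newline="") as fh:
        return {(r["chrom"], r["pos"], r["ref"], r["alt"]) for r in csv.DictReader(fh)}

accessions = {name: load_snps(f"{name}_snps.csv")
              for name in ("accession_A", "accession_B", "yellow_fruited")}

shared = set.intersection(*accessions.values())
private = {name: snps - set.union(*(s for other, s in accessions.items() if other != name))
           for name, snps in accessions.items()}

print("SNPs shared by all accessions:", len(shared))
for name, snps in sorted(private.items()):
    print(f"SNPs private to {name}:", len(snps))
```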