831 results for Reduced physical models


Relevance: 30.00%

Abstract:

The mechanism of wave-seabed interaction has been studied extensively by coastal geotechnical engineers in recent years, and numerous poro-elastic models have been proposed to investigate wave propagation in a seabed. The existing poro-elastic models include the drained model, the consolidation model, the Coulomb-damping model, and the fully dynamic model; however, the differences between these models have remained unclear. In this paper, the fully dynamic poro-elastic model for wave-seabed interaction is derived first, and the existing models are then recovered as reductions of the fully dynamic model. Based on numerical comparisons, the applicable range of each model is clarified for engineering practice.
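
The equations themselves are not given in the abstract; as a hedged sketch of the hierarchy it describes (Biot-type notation assumed here: σ′ effective stress, p pore pressure, u solid displacement, w fluid displacement relative to the solid, n porosity, k permeability, γ_w unit weight of water, K_f pore-fluid bulk modulus, ρ and ρ_f bulk and fluid densities), the fully dynamic model can be written as

$$\nabla\cdot\boldsymbol{\sigma}' - \nabla p = \rho\,\ddot{\mathbf{u}} + \rho_f\,\ddot{\mathbf{w}}, \qquad -\nabla p = \rho_f\,\ddot{\mathbf{u}} + \frac{\rho_f}{n}\,\ddot{\mathbf{w}} + \frac{\gamma_w}{k}\,\dot{\mathbf{w}}, \qquad \nabla\cdot\dot{\mathbf{u}} + \nabla\cdot\dot{\mathbf{w}} + \frac{n}{K_f}\,\dot{p} = 0.$$

Dropping the relative fluid acceleration term gives the partly dynamic (u-p) family, and dropping all inertia terms recovers the quasi-static consolidation model; the drained model further neglects the pore-pressure coupling.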

Relevance: 30.00%

Abstract:

Silicon carbide bulk crystals were grown in an induction-heating furnace using the physical vapor transport (PVT) method. Crystal growth modeling was performed to determine the inert gas pressure and temperatures required for sufficiently large growth rates. The SiC crystals were expanded by designing a growth chamber with a positive temperature gradient along the growth interface. The 6H-SiC crystals obtained were cut into wafers and characterized by Raman scattering spectroscopy and X-ray diffraction; the results showed that most parts of the crystals had good crystallographic structure.
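
As a back-of-the-envelope sketch of why the inert gas pressure and temperature enter the growth-rate modeling (a diffusion-limited transport estimate under assumed property values, not the paper's model; the vapor pressures and the diffusivity prefactor are placeholders):

```python
import numpy as np

k_B = 1.380649e-23   # Boltzmann constant, J/K

def growth_rate(p_src, p_seed, T, gap, p_inert):
    """Linear growth rate (m/s) from 1-D diffusion-limited vapor transport.

    p_src, p_seed : vapor pressures at source and seed, Pa (assumed inputs)
    T             : mean gap temperature, K
    gap           : source-to-seed distance, m
    p_inert       : argon pressure, Pa; diffusivity scales as T**1.5 / p_inert
    """
    D = 1e-5 * (T / 300.0)**1.5 * (1e5 / p_inert)    # gas diffusivity estimate, m^2/s
    flux = D * (p_src - p_seed) / (k_B * T * gap)    # molecules / (m^2 s)
    omega = 2.07e-29                                 # volume per SiC formula unit, m^3
    return flux * omega

# e.g. roughly 0.3 mm/h at 2500 K, 2 cm gap, 1 kPa argon, 100 Pa overpressure
print(growth_rate(1.1e3, 1.0e3, 2500.0, 0.02, 1.0e3) * 3.6e6, "mm/h")
```

The inverse dependence of the diffusivity on the inert gas pressure is what makes a low argon pressure the main lever for raising the transport-limited growth rate.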

Relevance: 30.00%

Abstract:

The advent of nanotechnology has necessitated a better understanding of how changes in material microstructure at the atomic level affect the macroscopic properties that control performance. This challenge has uncovered many phenomena that were previously taken for granted and not well understood, among them the basic foundations of dislocation theory, which are now known to be inadequate. Simplifying assumptions invoked at the macroscale may not apply at the micro- and/or nanoscale. There are implications of scaling hierarchy associated with inhomogeneity and nonequilibrium of physical systems: what is taken to be homogeneous and in equilibrium at the macroscale may not be so when the physical size of the material is reduced to microns. These fundamental issues cannot be dismissed at will for the sake of convenience, because they can alter the outcome of predictions. Even more unsatisfying is the lack of consistency in modeling physical systems, which can translate into an inability to identify the relevant manufacturing parameters, rendering the end product impractical because of high cost. Advanced composite and ceramic materials are cases in point. Discussed are potential pitfalls in applying models at both the atomic and continuum levels. No attempt is made to unravel the truth of nature, be it particulates, a smooth continuum, or a combination of both. The present trend of development in scaling is to seek different characteristic lengths of material microstructures, with or without the influence of time effects. Much can be learned from atomistic simulation models that show how results differ as boundary conditions and scales are changed. Quantum mechanical, continuum, and cosmological models provide evidence that no general approach is in sight. Of immediate interest is perhaps the establishment of greater precision in terminology, so as to better communicate results involving multiscale physical events.

Relevance: 30.00%

Abstract:

In the current paper, we primarily address one powerful simulation tool developed during the last decades: Large Eddy Simulation (LES), which is well suited to the unsteady, three-dimensional complex turbulent flows found in industry and the natural environment. The main point of LES is that the large-scale motion is resolved while the small-scale motion is modeled or, in geophysical terminology, parameterized. With a view to devising a subgrid-scale (SGS) model of high quality, we highlight the physical aspects of scale interaction and energy transfer, such as dissipation, backscatter, local and non-local interaction, anisotropy, and resolution requirements; these are the factors from which the advantages and disadvantages of existing SGS models derive. A case study on LES of turbulence in a vegetative canopy is presented to illustrate that an LES model is better when based on physical arguments. A variety of challenging complex turbulent flows in both industrial and geophysical fields are then presented. In conclusion, we may say with confidence that the new century will see flourishing turbulence research aided by LES combined with other approaches.
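
As one concrete example of the eddy-viscosity SGS closures discussed above (a minimal sketch of the classic Smagorinsky model on a uniform cubic grid, not the canopy model used in the case study):

```python
import numpy as np

def smagorinsky_nu_t(u, v, w, dx, Cs=0.17):
    """Eddy viscosity nu_t = (Cs*dx)**2 * |S| from the resolved field.

    u, v, w : 3-D arrays of resolved velocity components on a cubic grid
    dx      : grid spacing, used as the filter width
    Cs      : Smagorinsky constant (0.1-0.2 in practice, flow dependent)
    """
    dudx, dudy, dudz = np.gradient(u, dx)
    dvdx, dvdy, dvdz = np.gradient(v, dx)
    dwdx, dwdy, dwdz = np.gradient(w, dx)
    # resolved strain-rate tensor S_ij = (du_i/dx_j + du_j/dx_i) / 2
    S12 = 0.5 * (dudy + dvdx)
    S13 = 0.5 * (dudz + dwdx)
    S23 = 0.5 * (dvdz + dwdy)
    # |S| = sqrt(2 S_ij S_ij)
    Smag = np.sqrt(2.0 * (dudx**2 + dvdy**2 + dwdz**2)
                   + 4.0 * (S12**2 + S13**2 + S23**2))
    return (Cs * dx)**2 * Smag
```

Because nu_t >= 0, this closure is purely dissipative: it cannot return energy from the subgrid to the resolved scales, which is precisely the backscatter limitation noted above and one motivation for dynamic and mixed SGS models.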

Relevance: 30.00%

Abstract:

Classical fracture mechanics is based on the premise that small-scale features can be averaged to give a larger-scale property, so that the assumption of material homogeneity holds. Involvement of the material microstructure, however, necessitates different characteristic lengths for describing different geometric features. Macroscopic parameters cannot be freely exchanged with those at the microscopic scale level; such a practice can cause misinterpretation of test data. Ambiguities arising from the lack of a more precise range of limitations in the definitions of physical parameters are discussed in connection with material length scales. Physical events overlooked between the macroscopic and microscopic scales could be the link needed to bridge the gap. The classical models for the creation of free surface in a liquid or solid are oversimplified: they consider only the translational motion of individual atoms, while movements of groups or clusters of molecules also deserve attention. Multiscale cracking behavior likewise requires distinguishing material damage at two or more different scales within a single simulation. In this connection, special attention should be given to the use of asymptotic solutions, in contrast to full-field solutions, when applying fracture criteria; the former may leave out detailed features that would otherwise be included by the latter. Illustrations are provided for predicting the crack initiation sites of piezoceramics. No definite conclusions can be drawn from atomistic simulation models, such as those used in molecular dynamics, until non-equilibrium boundary conditions are better understood. The specification of strain rates and temperatures should be synchronized as the specimen size is reduced to microns. Many of the results obtained at the atomic scale should first be identified with those at the mesoscale before they are assumed to connect with macroscopic observations. Hopefully, "mesofracture mechanics" can serve as the link that brings macrofracture mechanics closer to microfracture mechanics.

Relevance: 30.00%

Abstract:

The stress release model, a stochastic version of elastic rebound theory, is applied to the large events from four synthetic earthquake catalogs generated by models with various levels of disorder in the distribution of fault zone strength (Ben-Zion, 1996). These include models with uniform properties (U), a Parkfield-type asperity (A), fractal brittle properties (F), and multi-size-scale heterogeneities (M). The results show that the degree of regularity or predictability in the assumed fault properties, based on both the Akaike information criterion and simulations, follows the order U, F, A, M, in good agreement with the order obtained by pattern recognition techniques applied to the full set of synthetic data. Data simulated from the best-fitting stress release models reproduce, both visually and in distributional terms, the main features of the original catalogs. The differences in character and in quality of prediction between the four cases are shown to depend on two main aspects: the parameter controlling the sensitivity to departures from the mean stress level, and the frequency-magnitude distribution, which differs substantially between the four cases. In particular, it is shown that the predictability of the data is strongly affected by the form of the frequency-magnitude distribution, being greatly reduced if a pure Gutenberg-Richter form is assumed to hold out to high magnitudes.
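
As a hedged sketch of the model being fitted (the stress release model in its usual form, with illustrative parameters rather than values estimated from the catalogs), the conditional intensity is exponential in the stress level and can be simulated by Ogata-style thinning; the parameter b below plays the role of the sensitivity to departures from the mean stress level mentioned above:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_srm(a=-2.0, b=1.0, r=0.5, t_end=200.0, horizon=1.0):
    """Event times for a process with intensity lambda(t) = exp(a + b*X(t)),
    where X(t) = r*t - (sum of stress drops of past events)."""
    t, released, events = 0.0, 0.0, []
    while t < t_end:
        # lambda is largest at the end of the look-ahead window [t, t+horizon]
        lam_max = np.exp(a + b * (r * (t + horizon) - released))
        w = rng.exponential(1.0 / lam_max)
        if w > horizon:              # no candidate event in this window
            t += horizon
            continue
        t += w
        lam = np.exp(a + b * (r * t - released))
        if t < t_end and rng.random() < lam / lam_max:   # thinning acceptance
            events.append(t)
            released += rng.exponential(1.0)             # random stress drop
    return np.array(events)

times = simulate_srm()
print(len(times), times[:5])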

Relevance: 30.00%

Abstract:

Methane hydrate, usually found beneath the deep seabed or in permafrost zones, is a potential energy resource for future years. Depressurization of horizontal wells bored into a methane hydrate layer is considered one possible method for dissociating the hydrate and extracting methane from the host soil. Since hydrate is likely to act as a bonding material in sandy soils, supported well construction is necessary to avoid well collapse due to the loss of this apparent cohesion during depressurization. This paper describes both physical and numerical modeling of such horizontal support wells. The experimental part involves depressurization of small well models in a large pressure cell, while the numerical part simulates the corresponding problem. The experimental models represent only gas-saturated initial conditions, whereas the numerical analysis simulates both gas-saturated and more realistic water-saturated conditions, based on an effective-stress coupled flow-deformation formulation of the three phases.
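
The three-phase formulation is not spelled out in the abstract; a Bishop-type effective stress of the kind such gas-water-soil models commonly adopt (stated here as an assumption, since the paper's exact closure may differ) reads

$$\boldsymbol{\sigma}' = \boldsymbol{\sigma} - \left[\chi\,p_w + (1-\chi)\,p_g\right]\mathbf{I},$$

where p_w and p_g are the pore-water and pore-gas pressures and the weighting parameter χ is often taken equal to the water saturation S_w.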

Relevance: 30.00%

Abstract:

Unlike most previous studies on the transverse vortex-induced vibration (VIV) of a cylinder, which mainly consider the wall-free condition (Williamson & Govardhan, 2004), this paper experimentally investigates the vortex-induced vibration of a cylinder with two degrees of freedom near a rigid wall exposed to steady flow. The amplitude and frequency responses of the cylinder are discussed. The lee-wake flow patterns of the cylinder undergoing VIV were visualized using the hydrogen bubble technique. The effects of the gap-to-diameter ratio (e0/D) and the mass ratio on the vibration amplitude and frequency are analyzed. The VIV response of the cylinder is compared between one degree of freedom (transverse only) and two degrees of freedom (streamwise and transverse), and between the present study and previous ones. The experimental observations indicate that there are two types of streamwise vibration: the first streamwise vibration (FSV), with small amplitude, and the second streamwise vibration (SSV), which coexists with the transverse vibration. The vortex shedding pattern for the FSV is approximately symmetric, while that for the SSV is alternate. The first streamwise vibration tends to disappear as e0/D decreases. For large gap-to-diameter ratios (e.g. e0/D = 0.54~1.58), the maximum amplitudes of the second streamwise vibration and of the transverse vibration increase with increasing gap-to-diameter ratio. For small gap-to-diameter ratios (e.g. e0/D = 0.16, 0.23), the vibration amplitude of the cylinder increases slowly at the initial stage (i.e. at small reduced velocity Vr) and, past the maximum amplitude, decreases quickly at the last stage (i.e. at large Vr). Within the examined range of small mass ratios (m* < 4), both the streamwise and transverse vibration amplitudes of the cylinder decrease with increasing mass ratio at a fixed value of Vr. The vibration range (in terms of Vr) tends to widen as the mass ratio decreases. In the second streamwise vibration region, the vibration frequency of a cylinder with a small mass ratio (e.g. m* = 1.44) undergoes a jump at a certain Vr. The maximum amplitude of the transverse vibration for the two-degree-of-freedom case is larger than that for the one-degree-of-freedom case, but the transverse vibration frequency with two degrees of freedom is lower than that with one degree of freedom (transverse).
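
For reference, the nondimensional groups the abstract relies on, under standard VIV definitions (assumed here: U free-stream velocity, f_n natural frequency of the elastically mounted cylinder, D cylinder diameter, m oscillating mass per unit length, ρ fluid density, e_0 initial gap to the wall):

$$V_r = \frac{U}{f_n D}, \qquad m^* = \frac{4m}{\pi\rho D^2}, \qquad \text{gap ratio} = \frac{e_0}{D}.$$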

Relevance: 30.00%

Abstract:

Part I

Particles are a key feature of planetary atmospheres. On Earth they represent the greatest source of uncertainty in the global energy budget. This uncertainty can be addressed by making more measurements, by improving the theoretical analysis of measurements, and by better modeling basic particle nucleation and initial particle growth within an atmosphere. This work focuses on the latter two methods of improvement.

Uncertainty in measurements is largely due to particle charging. Accurate descriptions of particle charging are challenging because one deals with particles in a gas, as opposed to a vacuum, so different length scales come into play. Previous studies have considered the effects of the transition between the continuum and kinetic regimes and of two- and three-body interactions within the kinetic regime. These studies, however, rest on questionable assumptions about the charging process, which skew the interpretation of observations and bias the proposed dynamics of aerosol particles. These assumptions affect both the ions and the particles in the system. Ions are assumed to be point monopoles with a single characteristic speed rather than following a distribution. Particles are assumed to be perfect conductors carrying up to five elementary charges. The effects of three-body (ion-molecule-particle) interactions are also overestimated. By revising this theory so that the basic physical attributes of both ions and particles, and of their interactions, are better represented, we are able to make more accurate predictions of particle charging in both the kinetic and continuum regimes.

The same revised theory used above to model ion charging can also be applied to the flux of neutral vapor-phase molecules to a particle or initial cluster. Using these results, we can model the vapor flux to a neutral or charged particle due to diffusion and electromagnetic interactions. In many classical theories currently applied to these models, the finite size of the molecule and the electromagnetic interaction between the molecule and the particle, especially in the neutral-particle case, are completely ignored or, as is often the case for a permanent-dipole vapor species, strongly underestimated. Comparing our model to these classical models, we determine an "enhancement factor" that characterizes how important the addition of these physical parameters and processes is to the understanding of particle nucleation and growth.
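
As a rough numerical illustration of such an enhancement factor (a minimal sketch under stated assumptions, not the thesis's model: the classical flux is taken as the Fuchs-Sutugin transition-regime expression for a point molecule, and the only added ingredient is a finite molecular radius r_m enlarging the capture radius; the electromagnetic terms are omitted):

```python
import numpy as np

def fuchs_sutugin(Kn, alpha=1.0):
    """Fuchs-Sutugin transition-regime correction to the continuum flux."""
    return (0.75 * alpha * (1.0 + Kn)) / (Kn**2 + Kn + 0.283 * Kn * alpha + 0.75 * alpha)

def vapor_flux(r_p, n_inf, D, mfp, r_m=0.0):
    """Molecules per second condensing on a spherical particle.

    r_p   : particle radius, m
    n_inf : far-field vapor number density, m^-3
    D     : vapor diffusivity in the carrier gas, m^2/s
    mfp   : vapor mean free path, m
    r_m   : molecular radius; r_p + r_m is the capture radius (assumption)
    """
    r_c = r_p + r_m
    Kn = mfp / r_c
    return 4.0 * np.pi * r_c * D * n_inf * fuchs_sutugin(Kn)

# enhancement factor: finite-size flux over the point-molecule classical flux
r_p, n_inf, D, mfp = 2e-9, 1e14, 1e-5, 6e-8
print(vapor_flux(r_p, n_inf, D, mfp, r_m=2e-10) / vapor_flux(r_p, n_inf, D, mfp))
```

Folding in the dipole and image-charge interactions discussed above would raise the flux further; the enhancement factor condenses all such additions into a single number.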

Part II

Whispering gallery mode (WGM) optical biosensors are capable of extraordinarily sensitive specific and non-specific detection of species suspended in a gas or fluid. Recent experimental results suggest that these devices may attain single-molecule sensitivity to protein solutions in the form of stepwise shifts in their resonance wavelength, λ_R, but present sensor models predict much smaller steps than were reported. This study examines the physical interaction between a WGM sensor and a molecule adsorbed to its surface, exploring assumptions made in previous efforts to model WGM sensor behavior, and describing computational schemes that model the experiments for which single-protein sensitivity was reported. The resulting model is used to simulate sensor performance within constraints imposed by the limited material-property data. On this basis, we conclude that nonlinear optical effects would be needed to attain the reported sensitivity and that, in the experiments for which extreme sensitivity was reported, a bound protein experiences optical energy fluxes too high for such effects to be ignored.
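
For context, the first-order (reactive) perturbation estimate underlying the "present sensor models" mentioned above relates the step size to the excess polarizability α_ex of the adsorbed molecule and the mode field E_0 at the binding site r_b (a standard linear result, offered here as an assumption about which models the abstract means):

$$\frac{\Delta\lambda_R}{\lambda_R} \;\simeq\; \frac{\alpha_{\mathrm{ex}}\,|\mathbf{E}_0(\mathbf{r}_b)|^2}{2\int \varepsilon(\mathbf{r})\,|\mathbf{E}_0(\mathbf{r})|^2\,dV}.$$

Because this estimate is linear in the perturbation, steps much larger than it predicts point toward the nonlinear optical effects invoked in the conclusion.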

Relevance: 30.00%

Abstract:

Home to hundreds of millions of souls and a land of excess, the Himalaya is also the locus of a unique seismicity whose scope and peculiarities remain to this day somewhat mysterious. Having claimed the lives of kings, or turned ancient timeworn cities into heaps of rubble and ruin, earthquakes eerily inhabit Nepalese folk tales with the fatalistic message that nothing lasts forever. From a scientific point of view as much as from a human perspective, solving the mysteries of Himalayan seismicity thus represents a challenge of prime importance.

Documenting geodetic strain across the Nepal Himalaya with various GPS and leveling data, we show that unlike other subduction zones, which exhibit a heterogeneous and patchy coupling pattern along strike, the last hundred kilometers of the Main Himalayan Thrust fault, or MHT, appear to be uniformly locked, devoid of any of the “creeping barriers” that traditionally ward off the propagation of large events. Since the approximately 20 mm/yr of convergence reckoned across the Himalaya matches previously established estimates of the secular deformation at the front of the arc, the slip accumulated at depth must at some point propagate elastically all the way to the surface. And yet, neither the large events of the past nor the currently recorded microseismicity comes close to compensating for the massive moment deficit that quietly builds up under the giant mountains. Along with this large unbalanced moment deficit, the uncommonly homogeneous coupling pattern on the MHT raises the question of whether the locked portion of the MHT can rupture all at once in a giant earthquake. Answering this question unequivocally appears contingent on the still elusive estimate of the magnitude of the largest possible earthquake in the Himalaya, and requires tight constraints on local fault properties. What makes the Himalaya enigmatic also makes it the potential source of an incredible wealth of information, and we exploit some of the oddities of Himalayan seismicity in an effort to improve the understanding of earthquake physics and to decipher the properties of the MHT.

Each year during the summer monsoon, the Indo-Gangetic plain is deluged with a tremendous amount of water, which collects and bears down on the Indian plate enough to pull it slightly away from the Eurasian plate, temporarily relieving a small portion of the stress mounting on the MHT. As the rainwater evaporates in the dry winter season, the plate rebounds and tension increases again on the fault. Interestingly, the mild waggle of stress induced by the monsoon rains is about the same size as that from solid-Earth tides, which gently tug at the planet's solid layers; but whereas changes in earthquake frequency correspond with the annually occurring monsoon, there is no such correlation with Earth tides, which oscillate back and forth twice a day. We therefore investigate the general response of the creeping and seismogenic parts of the MHT to periodic stresses in order to link these observations to physical parameters. First, the response of the creeping part of the MHT is analyzed with a simple spring-and-slider system bearing rate-strengthening rheology, and we show that at the transition with the locked zone, where the friction becomes nearly velocity neutral, the response of the slip rate may be amplified at certain periods, whose values are analytically related to the physical parameters of the problem.
Such predictions therefore hold the potential of constraining fault properties on the MHT, but they still await observational counterparts, as nothing indicates that variations of the seismicity rate on the locked part of the MHT are direct expressions of variations of the slip rate on its creeping part, and no variation of the slip rate has yet been singled out from the GPS measurements. Turning to the locked, seismogenic part of the MHT, spring-and-slider models with rate-weakening rheology are insufficient to explain the contrasting responses of the seismicity to the periodic loads that tides and the monsoon both place on the MHT. Instead, we resort to numerical simulations using the Boundary Integral CYCLes of Earthquakes algorithm and examine the response of a 2D finite fault embedded with a rate-weakening patch to harmonic stress perturbations of various periods. We show that such simulations reproduce a gradual amplification of sensitivity as the perturbing period gets larger, up to a critical period corresponding to the characteristic time of evolution of the seismicity in response to a step-like stress perturbation. This increase of sensitivity was not reproduced by simple 1D spring-slider systems, probably because of the complexity of the nucleation process, captured only by 2D fault models. When the nucleation zone is close to its critical unstable size, its growth becomes highly sensitive to external perturbations, and the timing of the resulting events may be strongly affected. A fully analytical framework has yet to be developed, and further work is needed to describe the behavior of the fault in terms of physical parameters, which will likely provide the keys to deducing constitutive properties of the MHT from seismological observations.
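
As a hedged illustration of the first of these analyses (a minimal spring-and-slider sketch with invented parameter values, not the thesis's calibrated model), the slip rate of a rate-strengthening slider under harmonic loading can be integrated directly:

```python
import numpy as np
from scipy.integrate import solve_ivp

sigma = 50e6          # effective normal stress, Pa
a_minus_b = 0.003     # rate-strengthening parameter (a-b) > 0
f0, v0 = 0.6, 1e-9    # reference friction and slip rate (loading rate = v0 here)
k = 1e6               # elastic loading stiffness, Pa/m
dtau = 1e3            # harmonic stress amplitude, Pa (kPa scale, like the monsoon)
period = 3.15e7       # forcing period, s (about 1 yr)

def slip_rate(t, delta):
    """Invert tau = sigma*(f0 + (a-b)*ln(v/v0)) for v, with tau set by the
    spring (loading at rate v0) plus the harmonic perturbation."""
    tau = f0 * sigma + k * (v0 * t - delta) + dtau * np.sin(2 * np.pi * t / period)
    return v0 * np.exp((tau - f0 * sigma) / (a_minus_b * sigma))

sol = solve_ivp(slip_rate, (0.0, 10 * period), [0.0], max_step=period / 200)
v = slip_rate(sol.t, sol.y[0])
print(v.max() / v.min())   # modulation grows as (a-b)*sigma -> 0 (velocity neutral)
```

The exponential inversion makes the amplification mechanism explicit: as the rate-strengthening product (a-b)*sigma shrinks toward velocity neutrality, the same kPa-scale stress waggle produces an ever larger modulation of the slip rate.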

Relevance: 30.00%

Abstract:

A summary of previous research is presented indicating that a blue copper protein's fold and hydrogen bond network, i.e., the rack effect, enforce a copper(II) geometry on the copper(I) ion in the metal site. In several blue copper proteins, the C-terminal histidine ligand becomes protonated and detaches from the copper in the reduced form. Mutants of amicyanin from Paracoccus denitrificans were made to alter the hydrogen bond network and to quantify the rack effect by pKa shifts.

The pKa's of mutant amicyanins have been measured by pH-dependent electrochemistry. The P94F and P94A mutations loosen the Northern loop, allowing the reduced copper to adopt a relaxed conformation; this ability to relax drives the reduction potentials up. The measured potentials are 265 (wild type), 380 (P94A), and 415 (P94F) mV vs. NHE, and the measured pKa's are 7.0 (wild type), 6.3 (P94A), and 5.0 (P94F). The additional hydrogen bond to the thiolate in the mutants is indicated by a red shift in the blue copper absorption and an increase in the parallel hyperfine splitting in the EPR spectrum. This hydrogen bond is invoked as the cause of the increased stability of the C-terminal imidazole.
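
The link between the measured potentials and pKa's can be summarized with the standard thermodynamic square scheme, assuming (as the text describes) that protonation of the C-terminal histidine matters only in the reduced state:

$$E_{1/2}(\mathrm{pH}) \;=\; E_{\mathrm{alk}} \;+\; \frac{RT}{F}\,\ln\!\left(1 + \frac{[\mathrm{H^+}]}{K_a^{\mathrm{red}}}\right),$$

where E_alk is the alkaline-limit potential and K_a^red is the acid dissociation constant of the protonated histidine in the Cu(I) state; a lower pKa (as in P94F) thus postpones the roughly 59 mV/decade rise of the observed potential to lower pH.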

Melting curves give a measure of the thermal stability of the protein, and they reveal a thermodynamic intermediate with pH-dependent reversibility. Comparisons with the electrochemistry and with apoamicyanin suggest that the intermediate involves the region of the protein near the metal site. This region is destabilized in the P94F mutant; coupled with the evidence that the imidazole is stabilized under the same conditions, this confirms an original concept of the rack effect: a high-energy configuration is stabilized at a cost to the rest of the protein.

Relevance: 30.00%

Abstract:

This dissertation describes efforts to model biological active sites with small molecule clusters. The approach took advantage of a multinucleating ligand to control the structure and nuclearity of the product complexes, allowing the study of many different homo- and heterometallic clusters. Chapter 2 describes the synthesis of the multinucleating hexapyridyl trialkoxy ligand used throughout this thesis and the synthesis of trinuclear first-row transition metal complexes supported by this framework, with an emphasis on tricopper systems as models of biological multicopper oxidases. The magnetic susceptibilities of these complexes were studied, and a linear relation was found between the Cu-O(alkoxide)-Cu angles and the antiferromagnetic coupling between copper centers. The triiron(II) and trizinc(II) complexes of the ligand were also isolated and structurally characterized.
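
A common way to express the magnetostructural correlation reported here (a hedged sketch; sign and factor conventions for J vary across the literature) is to fit the susceptibility data to an isotropic Heisenberg Hamiltonian and regress the fitted coupling on the bridging angle:

$$\hat{H} = -2J\left(\hat{S}_1\!\cdot\!\hat{S}_2 + \hat{S}_2\!\cdot\!\hat{S}_3 + \hat{S}_1\!\cdot\!\hat{S}_3\right), \qquad J \approx \alpha\,\theta_{\mathrm{Cu\text{-}O\text{-}Cu}} + \beta,$$

with α and β empirical fit constants and J < 0 denoting antiferromagnetic coupling in this convention.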

Chapter 3 describes the synthesis of a series of heterometallic tetranuclear manganese dioxido complexes with various incorporated apical redox-inactive metal cations (M = Na+, Ca2+, Sr2+, Zn2+, Y3+). Chapter 4 presents the synthesis of heterometallic trimanganese(IV) tetraoxido complexes structurally related to the CaMn3 subsite of the oxygen-evolving complex (OEC) of Photosystem II. The reduction potentials of these complexes were studied, and it was found that each isostructural series displays a linear correlation between the reduction potentials and the Lewis acidities of the incorporated redox-inactive metals. The slopes of the plotted lines for both the dioxido and tetraoxido clusters are the same, suggesting a more general relationship between the electrochemical potentials of heterometallic manganese oxido clusters and their “spectator” cations. Additionally, these studies suggest that Ca2+ plays a role in modulating the redox potential of the OEC for water oxidation.

Chapter 5 presents studies of the effects of the redox-inactive metals on the reactivities of the heterometallic manganese complexes discussed in Chapters 3 and 4. Oxygen atom transfer from the clusters to phosphines is studied; although the reactivity is kinetically controlled in the tetraoxido clusters, the dioxido clusters with more Lewis acidic metal ions (Y3+ vs. Ca2+) appear to be more reactive. Investigations of hydrogen atom transfer and electron transfer rates are also discussed.

Appendix A describes the synthesis and metallation reactions of a new dinucleating bis(N-heterocyclic carbene) ligand framework. Dicopper(I) and dicobalt(II) complexes of this ligand were prepared and structurally characterized. A dinickel(I) dichloride complex was synthesized, reduced, and found to activate carbon dioxide. Appendix B describes preliminary efforts to desymmetrize the manganese oxido clusters via functionalization of the basal multinucleating ligand used in the preceding sections of this dissertation. Finally, Appendix C presents some partially characterized side products and unexpected structures that were isolated throughout the course of these studies.

Relevance: 30.00%

Abstract:

An economic air pollution control model, which determines the least cost of reaching various air quality levels, is formulated. The model takes the form of a general nonlinear mathematical programming problem, in which primary contaminant emission levels are the independent variables. The objective function is the cost of attaining various emission levels; it is minimized subject to constraints that given air quality levels be attained.

The model is applied to a simplified statement of the photochemical smog problem in Los Angeles County in 1975, with emissions specified by a two-dimensional vector: total reactive hydrocarbon (RHC) and nitrogen oxide (NOx) emissions. Air quality, also two-dimensional, is measured by the expected number of days per year that nitrogen dioxide (NO2) and mid-day ozone (O3) exceed standards in Central Los Angeles.

The minimum cost of reaching various emission levels is found by a linear programming model. The base or "uncontrolled" emission levels are those that will exist in 1975 under the present new car control program and with the degree of stationary source control existing in 1971. Controls, basically "add-on" devices, are considered here for used cars, aircraft, and existing stationary sources. It is found that with these added controls, Los Angeles County emission levels [(1300 tons/day RHC, 1000 tons/day NOx) in 1969; (670 tons/day RHC, 790 tons/day NOx) at the base 1975 level] can be reduced to 260 tons/day RHC (minimum RHC program) and 460 tons/day NOx (minimum NOx program).
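
A minimal sketch of a linear program with this structure (illustrative numbers except for the 1975 base emissions quoted above; the control measures, costs, and removal rates are invented, not the paper's tables):

```python
from scipy.optimize import linprog

# Two pollutants (RHC, NOx) and three hypothetical "add-on" control measures.
cost = [30.0, 45.0, 12.0]            # annualized $M/yr at full deployment
removal = [[300.0, 0.0, 150.0],      # tons/day of RHC removed at full deployment
           [0.0, 180.0, 60.0]]       # tons/day of NOx removed
base = [670.0, 790.0]                # 1975 base emissions from the text, tons/day
target = [300.0, 550.0]              # emissions meeting a chosen air quality level

# require removal @ x >= base - target, i.e. -removal @ x <= target - base
A_ub = [[-r for r in row] for row in removal]
b_ub = [t - b for b, t in zip(base, target)]
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, 1.0)] * 3)
print(res.x, res.fun)                # deployment fractions and least cost
```

Re-solving over a grid of emission targets traces out the cost-emission frontier that the paper combines with the air quality models below.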

"Phenomenological" or statistical air quality models provide the relationship between air quality and emissions. These models estimate the relationship by using atmospheric monitoring data taken at one (yearly) emission level and by using certain simple physical assumptions, (e. g., that emissions are reduced proportionately at all points in space and time). For NO2, (concentrations assumed proportional to NOx emissions), it is found that standard violations in Central Los Angeles, (55 in 1969), can be reduced to 25, 5, and 0 days per year by controlling emissions to 800, 550, and 300 tons /day, respectively. A probabilistic model reveals that RHC control is much more effective than NOx control in reducing Central Los Angeles ozone. The 150 days per year ozone violations in 1969 can be reduced to 75, 30, 10, and 0 days per year by abating RHC emissions to 700, 450, 300, and 150 tons/day, respectively, (at the 1969 NOx emission level).

The control cost-emission level and air quality-emission level relationships are combined in a graphical solution of the complete model to find the cost of various air quality levels. Best possible air quality levels with the controls considered here are 8 O3 and 10 NO2 violations per year (minimum ozone program) or 25 O3 and 3 NO2 violations per year (minimum NO2 program) with an annualized cost of $230,000,000 (above the estimated $150,000,000 per year for the new car control program for Los Angeles County motor vehicles in 1975).

Relevance: 30.00%

Abstract:

The assembly history of massive galaxies is one of the most important aspects of galaxy formation and evolution. Although we have a broad idea of what physical processes govern the early phases of galaxy evolution, there are still many open questions. In this thesis I demonstrate the crucial role that spectroscopy can play in a physical understanding of galaxy evolution. I present deep near-infrared spectroscopy for a sample of high-redshift galaxies, from which I derive important physical properties and their evolution with cosmic time. I take advantage of the recent arrival of efficient near-infrared detectors to target the rest-frame optical spectra of z > 1 galaxies, from which many physical quantities can be derived. After illustrating the applications of near-infrared deep spectroscopy with a study of star-forming galaxies, I focus on the evolution of massive quiescent systems.

Most of this thesis is based on two samples collected at the W. M. Keck Observatory that represent a significant step forward in the spectroscopic study of z > 1 quiescent galaxies. All previous spectroscopic samples at this redshift were either limited to a few objects, or much shallower in terms of depth. Our first sample is composed of 56 quiescent galaxies at 1 < z < 1.6 collected using the upgraded red arm of the Low Resolution Imaging Spectrometer (LRIS). The second consists of 24 deep spectra of 1.5 < z < 2.5 quiescent objects observed with the Multi-Object Spectrometer For Infra-Red Exploration (MOSFIRE). Together, these spectra span the critical epoch 1 < z < 2.5, where most of the red sequence is formed, and where the sizes of quiescent systems are observed to increase significantly.

We measure stellar velocity dispersions and dynamical masses for the largest number of z > 1 quiescent galaxies to date. By assuming that the velocity dispersion of a massive galaxy does not change throughout its lifetime, as suggested by theoretical studies, we match galaxies in the local universe with their high-redshift progenitors. This allows us to derive the physical growth in mass and size experienced by individual systems, which represents a substantial advance over photometric inferences based on the overall galaxy population. We find a significant physical growth among quiescent galaxies over 0 < z < 2.5 and, by comparing the slope of growth in the mass-size plane dlogRe/dlogM with the results of numerical simulations, we can constrain the physical process responsible for the evolution. Our results show that the slope of growth becomes steeper at higher redshifts, yet is broadly consistent with minor mergers being the main process by which individual objects evolve in mass and size.
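
As a concrete sketch of the dynamical-mass estimate referred to above (the virial coefficient is an assumption: β ≈ 5 is a common Sérsic-dependent calibration, and the thesis's exact choice is not reproduced here):

```python
# Hedged sketch: M_dyn = beta * sigma^2 * R_e / G from the measured
# velocity dispersion and effective radius.
G = 4.301e-6   # gravitational constant in kpc * (km/s)^2 / M_sun

def m_dyn(sigma, r_e, beta=5.0):
    """Dynamical mass (M_sun) from velocity dispersion sigma (km/s)
    and effective radius r_e (kpc)."""
    return beta * sigma**2 * r_e / G

print(f"{m_dyn(250.0, 2.0):.2e}")   # a compact massive quiescent galaxy, ~1e11 M_sun
```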

By fitting stellar population models to the observed spectroscopy and photometry we derive reliable ages and other stellar population properties. We show that the addition of the spectroscopic data helps break the degeneracy between age and dust extinction, and yields significantly more robust results than fitting models to the photometry alone. We detect a clear relation between size and age, in which larger galaxies are younger. Therefore, over time the average size of the quiescent population will increase because of the contribution of large galaxies recently arrived on the red sequence. This effect, called progenitor bias, is different from the physical size growth discussed above, but represents another contribution to the observed difference between the typical sizes of low- and high-redshift quiescent galaxies. By reconstructing the evolution of the red sequence starting at z ∼ 1.25 and using our stellar population histories to infer the past behavior to z ∼ 2, we demonstrate that progenitor bias accounts for only half of the observed growth of the population. The remaining size evolution must be due to physical growth of individual systems, in agreement with our dynamical study.

Finally, we use the stellar population properties to explore the earliest periods which led to the formation of massive quiescent galaxies. We find tentative evidence for two channels of star formation quenching, which suggests the existence of two independent physical mechanisms. We also detect a mass downsizing, where more massive galaxies form at higher redshift, and then evolve passively. By analyzing in depth the star formation history of the brightest object at z > 2 in our sample, we are able to put constraints on the quenching timescale and on the properties of its progenitor.

A consistent picture emerges from our analyses: massive galaxies form at very early epochs, are quenched on short timescales, and then evolve passively. The evolution is passive in the sense that no new stars are formed, but significant mass and size growth is achieved by accreting smaller, gas-poor systems. At the same time the population of quiescent galaxies grows in number due to the quenching of larger star-forming galaxies. This picture is in agreement with other observational studies, such as measurements of the merger rate and analyses of galaxy evolution at fixed number density.