972 results for Quasi-1D confinement
Abstract:
In this work we performed a study to obtain the parameters of a 1D regional velocity model for the Borborema Province, NE Brazil. We used earthquakes that occurred between 2001 and 2013 with magnitudes greater than 2.9 mb, with epicentres determined either from local seismic networks or, when possible, by back-azimuth determination. We chose seven events that occurred in the main seismic areas of the Borborema Province. The selected events were recorded by up to 74 seismic stations from the following networks: RSISNE, INCT-ET, João Câmara – RN, São Rafael – RN, Caruaru – PE, São Caetano – PE, Castanhão – CE, Santana do Acarau – CE, Taipu – RN and Sobral – CE, plus station RCBR (IRIS/USGS–GSN). The model parameters were determined by inverting a travel-time table and its fit. These parameters were compared with other known models (global and regional) and improved the epicentral determinations. The final parameter set, which we call MBB, is laterally homogeneous, with an upper crust extending to 11.45 km depth and a total crustal thickness of 33.9 km. The P-wave velocity was estimated at 6.0 km/s in the upper crust and 6.64 km/s in the lower crust. The P-wave velocity in the upper mantle was estimated at 8.21 km/s, with a VP/VS ratio of approximately 1.74.
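The layer parameters quoted above are enough to sketch first-arrival travel times for a model of this kind. The following is a minimal flat-layer approximation with a surface source and receiver; the function names and the simplifications are ours, not from the paper:

```python
import math

# Layer parameters of the MBB model quoted in the abstract:
# upper crust to 11.45 km (Vp = 6.0 km/s), lower crust to 33.9 km
# (Vp = 6.64 km/s), upper-mantle Vp = 8.21 km/s.
H1, V1 = 11.45, 6.0
H2, V2 = 33.9 - 11.45, 6.64
V3 = 8.21

def t_pg(x):
    """Direct wave through the upper crust, epicentral distance x in km."""
    return x / V1

def t_pn(x):
    """Head wave refracted along the Moho (flat-layer intercept-time formula)."""
    intercept = (2 * H1 * math.sqrt(1 / V1**2 - 1 / V3**2)
                 + 2 * H2 * math.sqrt(1 / V2**2 - 1 / V3**2))
    return x / V3 + intercept
```

In this approximation the refracted Pn branch overtakes the direct Pg branch at roughly 150 km epicentral distance, which is the kind of crossover a travel-time inversion exploits.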
Abstract:
Ships and offshore structures that encounter ice floes tend to experience loads with varying pressure distributions within the contact patch. The ice surrounding that directly involved in the contact zone influences the effective strength; this effect has come to be called confinement. A methodology for quantifying ice-sample confinement is developed, and the confinement is defined using two non-dimensional terms: a ratio of geometries and an angle. Together these terms are used to modify force predictions to account for increased fracturing and spalling at lower confinement levels. Data developed through laboratory experimentation are studied using dimensional analysis, which allows easy comparison between many different load cases, provided the impact scenario is consistent. In all, a methodology is developed for analyzing ice impact tests considering confinement effects on force levels, with the potential for extrapolating these tests to full-size collision events.
Abstract:
This thesis provides an introduction to the Wigner function, a phase-space function that plays a key role in several areas of physics such as quantum optics. The first chapter briefly develops the mathematical-physical apparatus of Weyl quantization and then introduces the homonymous quantization map between phase-space functions and quantum operators. The second part outlines the notion of a quasi-probability distribution and gives some important examples of the Wigner function for the eigenstates of the harmonic oscillator. Finally, the last chapter sketches the experimental landscape in which the Wigner function is used.
Abstract:
In this thesis, a numerical design approach has been proposed and developed based on the transmission matrix method in order to characterize periodic and quasi-periodic photonic structures in silicon-on-insulator. The approach and its performance have been extensively tested with specific structures in 2D and its validity has been verified in 3D.
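The transmission (transfer) matrix method mentioned above can be illustrated for a 1D multilayer at normal incidence: each layer contributes one 2x2 characteristic matrix, and the product of the matrices links the fields on either side of the stack. The quarter-wave mirror below is purely illustrative (indices roughly those of Si and SiO2) and is not taken from the thesis:

```python
import numpy as np

def layer_matrix(n, d, lam):
    """Characteristic (transfer) matrix of one layer at normal incidence."""
    delta = 2 * np.pi * n * d / lam  # phase thickness of the layer
    return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                     [1j * n * np.sin(delta), np.cos(delta)]])

def reflectance(layers, n_in, n_sub, lam):
    """Multiply the layer matrices and extract the stack reflectance.

    layers: sequence of (refractive_index, thickness) pairs, front to back.
    """
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        M = M @ layer_matrix(n, d, lam)
    B, C = M @ np.array([1.0, n_sub], dtype=complex)
    r = (n_in * B - C) / (n_in * B + C)
    return abs(r) ** 2

# Illustrative quarter-wave Bragg mirror: 4 high/low index pairs.
lam0 = 1.55e-6
n_hi, n_lo = 3.48, 1.44
stack = [(n_hi, lam0 / (4 * n_hi)), (n_lo, lam0 / (4 * n_lo))] * 4
R = reflectance(stack, 1.0, 1.44, lam0)  # near-unity reflectance at lam0
```

The same matrix bookkeeping extends to quasi-periodic stacks simply by changing the layer sequence, which is what makes the method attractive for the structures studied in the thesis.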
Abstract:
Thesis digitized by the Direction des bibliothèques of the Université de Montréal.
Abstract:
Germanium (Ge) nanowires are of current research interest for high-speed nanoelectronic devices due to their lower band gap, high carrier mobility compatible with high-k dielectrics, and larger excitonic Bohr radius, which gives rise to a more pronounced quantum confinement effect [1-6]. A general route for the growth of Ge nanowires is to use liquid or solid growth promoters in a bottom-up approach, which allows control of the aspect ratio, diameter, and structure of 1D crystals via external parameters such as precursor feedstock, temperature, operating pressure, and precursor flow rate [3, 7-11]. Solid-phase seeding is preferred for more controlled processing of nanomaterials, potential suppression of the unintentional incorporation of high dopant concentrations in semiconductor nanowires, and avoidance of unwanted compositional tailing at the seed-nanowire interface [2, 5, 9, 12]. There are therefore distinct features of the solid-phase seeding mechanism that potentially offer opportunities for the controlled processing of nanomaterials with new physical properties. Superior control over the growth kinetics of nanowires could be achieved by controlling the inherent growth constraints instead of external parameters, which always carry instrumental inaccuracy. High dopant concentrations in semiconductor nanowires can result from unintentional incorporation of atoms from the metal seed material, as described for the Al-catalyzed VLS growth of Si nanowires [13], and this can in turn be suppressed by solid-phase seeding. In addition, very sharp interfaces between group IV semiconductor segments have been achieved with solid seeds [14], whereas the traditionally used liquid Au particles often lead to compositional tailing of the interface [15]. Korgel et al. also described the superior size retention of metal seeds in an SFSS nanowire growth process compared to an SFLS process using Au colloids [12].
Here we have used silver and alloy seed particles with different compositions to manipulate the growth of nanowires in the sub-eutectic regime. The solid-seeding approach also offers an opportunity to influence the crystallinity of the nanowires independently of the substrate. Taking advantage of the ready formation of stacking faults in metal nanoparticles, lamellar twins could be formed in the nanowires.
Abstract:
Thesis digitized by the Direction des bibliothèques of the Université de Montréal.
Abstract:
Past glacials can be thought of as natural experiments in which variations in boundary conditions influenced the character of climate change. However, beyond the last glacial, an integrated view of orbital- and millennial-scale changes and their relation to the record of glaciation has been lacking. Here, we present a detailed record of variations in the land-ocean system from the Portuguese margin during the penultimate glacial and place it within the framework of ice-volume changes, with particular reference to European ice-sheet dynamics. The interaction of orbital- and millennial-scale variability divides the glacial into an early part with warmer and wetter overall conditions and prominent climate oscillations, a transitional mid-part, and a late part with more subdued changes as the system entered a maximum glacial state. The most extreme event occurred in the mid-part and was associated with melting of the extensive European ice sheet and maximum discharge from the Fleuve Manche river. This led to disruption of the meridional overturning circulation, but not a major activation of the bipolar seesaw. In addition to stadial duration, magnitude of freshwater forcing, and background climate, the evidence also points to the influence of the location of freshwater discharges on the extent of interhemispheric heat transport.
Abstract:
Let A be a unital dense algebra of linear mappings on a complex vector space X. Let φ = Σ_{i=1}^{n} M_{a_i,b_i} be a locally quasi-nilpotent elementary operator of length n on A. We show that, if {a_1, . . . , a_n} is locally linearly independent, then the local dimension of V(φ) = span{b_i a_j : 1 ≤ i, j ≤ n} is at most n(n−1)/2. If ldim V(φ) = n(n−1)/2, then there exists a representation of φ as φ = Σ_{i=1}^{n} M_{u_i,v_i} with v_i u_j = 0 for i ≥ j. Moreover, we give a complete characterization of locally quasi-nilpotent elementary operators of length 3.
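For readers outside the area: an elementary operator is built from two-sided multiplications, which under the usual convention (our restatement, not spelled out in the abstract) reads

```latex
M_{a,b}\colon A \to A, \qquad M_{a,b}(x) = a\,x\,b, \qquad
\varphi \;=\; \sum_{i=1}^{n} M_{a_i,\,b_i},
\qquad \operatorname{ldim} V(\varphi) \;\le\; \frac{n(n-1)}{2}.
```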
Abstract:
Reliability has emerged as a critical design constraint, especially in memories. Designers go to great lengths to guarantee fault-free operation of the underlying silicon by adopting redundancy-based techniques, which essentially try to detect and correct every single error. However, such techniques come at the cost of large area, power, and performance overheads, which have led many researchers to doubt their efficiency, especially for error-resilient systems where 100% accuracy is not always required. In this paper, we present an alternative method focused on confining the output error induced by any reliability issue. Focusing on memory faults, rather than correcting every single error, the proposed method exploits the statistical characteristics of the target application and replaces any erroneous data with the best available estimate of that data. To realize the proposed method, a RISC processor is augmented with custom instructions and special-purpose functional units. We apply the method on the enhanced processor by studying the statistical characteristics of the various algorithms involved in a popular multimedia application. Our experimental results show that, in contrast to state-of-the-art fault-tolerance approaches, we are able to reduce runtime and area overhead by 71.3% and 83.3%, respectively.
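The confinement idea above, replacing an erroneous value with the best available estimate instead of correcting it, can be sketched in software. The paper realizes it in hardware with custom RISC instructions and functional units; everything below (the names, the running-mean estimator, the fault model) is an illustrative assumption:

```python
class RunningMean:
    """Best-available-estimate tracker fed by known-good reads."""
    def __init__(self):
        self.total, self.count = 0.0, 0
    def update(self, x):
        self.total += x
        self.count += 1
    def mean(self):
        return self.total / self.count if self.count else 0.0

def read_with_confinement(memory, addr, faulty, estimate):
    """Return the stored value, or a statistical estimate if the cell is faulty.

    Rather than detect-and-correct every bit error, the error is confined:
    a faulty read is replaced by the running mean of recent good reads.
    """
    if addr in faulty:
        return estimate.mean()
    value = memory[addr]
    estimate.update(value)
    return value

# Illustrative fault scenario: address 2 is unreliable.
memory = [12.0, 10.0, 11.0, 9.0]
faulty = {2}
est = RunningMean()
good = [read_with_confinement(memory, a, faulty, est) for a in (0, 1)]
patched = read_with_confinement(memory, 2, faulty, est)  # mean of the good reads
```

For error-resilient multimedia workloads, a statistically plausible substitute for a pixel or coefficient is often perceptually indistinguishable from the true value, which is why confining the error can be far cheaper than fully correcting it.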
Abstract:
Objectives: To determine if providing informal care to a co-resident with dementia symptoms places an additional risk on the likelihood of poor mental health or mortality compared to co-resident non-caregivers.
Design: A quasi-experimental design of caregiving and non-caregiving co-residents of individuals with dementia symptoms, providing a natural comparator for the additive effects of caregiving on top of living with an individual with dementia symptoms.
Methods: Census records, providing information on household structure, intensity of caregiving, presence of dementia symptoms and self-reported mental health, were linked to mortality records over the following 33 months. Multi-level regression models were constructed to determine the risk of poor mental health and death in co-resident caregivers of individuals with dementia symptoms compared to co-resident non-caregivers, adjusting for the clustering of individuals within households.
Results: The cohort consisted of 10,982 co-residents (55.1% caregivers), with 12.1% of non-caregivers reporting poor mental health compared to 8.4% of intense caregivers (>20 hours of care per week). During follow-up the cohort experienced 560 deaths (245 among caregivers). Overall, caregiving co-residents were at no greater risk of poor mental health but had lower mortality risk than non-caregiving co-residents (ORadj=0.93, 95% CI 0.79, 1.10 and ORadj=0.67, 95% CI 0.56, 0.81, respectively); this lower mortality risk was also seen amongst the most intense caregivers (ORadj=0.65, 95% CI 0.53, 0.79).
Conclusion: Caregiving poses no additional risk to mental health over and above the risk associated with merely living with someone with dementia, and is associated with a lower mortality risk compared to non-caregiving co-residents.
Abstract:
We present the first 3D simulation of the last minutes of oxygen shell burning in an 18 solar mass supernova progenitor up to the onset of core collapse. A moving inner boundary is used to accurately model the contraction of the silicon and iron core according to a 1D stellar evolution model with a self-consistent treatment of core deleptonization and nuclear quasi-equilibrium. The simulation covers the full solid angle to allow the emergence of large-scale convective modes. Due to core contraction and the concomitant acceleration of nuclear burning, the convective Mach number increases to ~0.1 at collapse, and an l=2 mode emerges shortly before the end of the simulation. Aside from a growth of the oxygen shell from 0.51 to 0.56 solar masses due to entrainment from the carbon shell, the convective flow is reasonably well described by mixing length theory, and the dominant scales are compatible with estimates from linear stability analysis. We deduce that artificial changes in the physics, such as accelerated core contraction, can have precarious consequences for the state of convection at collapse. We argue that scaling laws for the convective velocities and eddy sizes furnish good estimates for the state of shell convection at collapse and develop a simple analytic theory for the impact of convective seed perturbations on shock revival in the ensuing supernova. We predict a reduction of the critical luminosity for explosion by 12--24% due to seed asphericities for our 3D progenitor model relative to the case without large seed perturbations.