905 results for Two-stage stochastic model


Relevance: 100.00%

Publisher:

Abstract:

Atlantic menhaden, Brevoortia tyrannus, the object of a major purse-seine fishery along the U.S. east coast, are landed at plants from northern Florida to central Maine. The National Marine Fisheries Service has sampled these landings since 1955 for length, weight, and age. Together with records of landings at each plant, the samples are used to estimate numbers of fish landed at each age. This report analyzes the sampling design in terms of probability sampling theory. The design is classified as two-stage cluster sampling, the first stage consisting of purse-seine sets randomly selected from the population of all sets landed, and the second stage consisting of fish randomly selected from each sampled set. Implicit assumptions of this design are discussed, with special attention to current sampling procedures. Methods are developed for estimating mean fish weight, numbers of fish landed, and age composition of the catch, with approximate 95% confidence intervals. Based on specific results from three ports (Port Monmouth, N.J., Reedville, Va., and Beaufort, N.C.) for the 1979 fishing season, recommendations are made for improving sampling procedures to comply more exactly with assumptions of the sampling design. These recommendations include adopting more formal methods for randomizing set and fish selection, increasing the number of sets sampled, considering the bias introduced by unequal set sizes, and developing methods to optimize the use of funds and personnel. (PDF file contains 22 pages.)
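
For orientation, a textbook two-stage cluster sampling estimator of mean fish weight, with an approximate 95% confidence interval, is sketched below; this is the generic equal-size form, and the report's own estimators (which address unequal set sizes) may differ in detail. Here $\bar{y}_i$ is the mean weight in the $i$-th sampled set and $n$ is the number of sets sampled.

\[
\bar{y} = \frac{1}{n}\sum_{i=1}^{n} \bar{y}_i, \qquad
\widehat{\operatorname{Var}}(\bar{y}) = \frac{s_b^2}{n}
\ \text{ with }\
s_b^2 = \frac{1}{n-1}\sum_{i=1}^{n} (\bar{y}_i - \bar{y})^2, \qquad
\text{95\% CI: } \bar{y} \pm 1.96\,\sqrt{\widehat{\operatorname{Var}}(\bar{y})}.
\]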

Relevance: 100.00%

Publisher:

Abstract:

Large-eddy simulation (LES) has emerged as a promising tool for simulating turbulent flows in general and, in recent years, has also been applied to particle-laden turbulence with some success (Kassinos et al., 2007). The motion of inertial particles is much more complicated than that of fluid elements, and therefore LES of turbulent flow laden with inertial particles encounters new challenges. In conventional LES, only large-scale eddies are explicitly resolved and the effects of unresolved, small or subgrid scale (SGS) eddies on the large-scale eddies are modeled; the SGS turbulent flow field is not available. The effects of the SGS turbulent velocity field on particle motion have been studied by Wang and Squires (1996), Armenio et al. (1999), Yamamoto et al. (2001), Shotorban and Mashayek (2006a,b), Fede and Simonin (2006), Berrouk et al. (2007), Bini and Jones (2008), and Pozorski and Apte (2009), amongst others. One contemporary method to include the effects of SGS eddies on inertial particle motions is to introduce a stochastic differential equation (SDE), that is, a Langevin stochastic equation to model the SGS fluid velocity seen by inertial particles (Fede et al., 2006; Shotorban and Mashayek, 2006a,b; Berrouk et al., 2007; Bini and Jones, 2008; Pozorski and Apte, 2009). However, the accuracy of such a Langevin equation model depends primarily on the prescription of the SGS fluid velocity autocorrelation time seen by an inertial particle, or the inertial particle–SGS eddy interaction timescale (denoted by $\delta T_{Lp}$), and on a second model constant in the diffusion term which controls the intensity of the random force received by an inertial particle (denoted by $C_0$; see Eq. (7)). From the theoretical point of view, $\delta T_{Lp}$ differs significantly from the Lagrangian fluid velocity correlation time (Reeks, 1977; Wang and Stock, 1993), and this difference carries the essential nonlinearity in the statistical modeling of particle motion. Both $\delta T_{Lp}$ and $C_0$ may depend on the filter width and particle Stokes number even for a given turbulent flow. In previous studies, $\delta T_{Lp}$ is modeled either by the fluid SGS Lagrangian timescale (Fede et al., 2006; Shotorban and Mashayek, 2006b; Pozorski and Apte, 2009; Bini and Jones, 2008) or by a simple extension of the timescale obtained from the full flow field (Berrouk et al., 2007). In this work, we study the subtle and non-monotonic dependence of $\delta T_{Lp}$ on the filter width and particle Stokes number using a flow field obtained from direct numerical simulation (DNS). We then propose an empirical closure model for $\delta T_{Lp}$. Finally, the model is validated against LES of particle-laden turbulence in predicting single-particle statistics such as particle kinetic energy. As a first step, we consider particle motion under the one-way coupling assumption in isotropic turbulent flow and neglect the gravitational settling effect. The one-way coupling assumption is only valid for low particle mass loading.
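
The abstract's Eq. (7) is not reproduced here, but Langevin models of this family commonly take an Ornstein-Uhlenbeck form for the SGS fluid velocity $u_i$ seen by a particle; the following is a generic sketch in which $\delta T_{Lp}$ and $C_0$ play the roles described above, not necessarily the paper's exact equation.

\[
\mathrm{d}u_i = -\frac{u_i}{\delta T_{Lp}}\,\mathrm{d}t + \sqrt{C_0\,\varepsilon_{\mathrm{sgs}}}\,\mathrm{d}W_i,
\]

where $\varepsilon_{\mathrm{sgs}}$ is the subgrid-scale dissipation rate and $W_i$ is a Wiener process: the drift term sets the velocity decorrelation time, and the diffusion coefficient sets the intensity of the random forcing.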

Relevance: 100.00%

Publisher:

Abstract:

Methods of collecting samples for the purpose of estimating the numbers and weights of fish caught, by length interval, are described. Several models for two-stage sampling are described, and the equations for the estimators and their variances are given. The results from a brief simulation study are used to show the differences between estimates made with the different models. Estimators for the average weights of fish in the catch and their variances are also described. These average weights are used to provide improved estimates of the total annual catches of yellowfin taken from the eastern Pacific Ocean, east of 150°W, between 1955 and 1990. (PDF contains 41 pages.)
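
As a hedged illustration of how average weights yield improved catch-number estimates (the report's actual estimators may differ), the basic ratio identity with its first-order (delta-method) variance is:

\[
\hat{N} = \frac{W}{\hat{\bar{w}}}, \qquad
\widehat{\operatorname{Var}}(\hat{N}) \approx \frac{W^2}{\hat{\bar{w}}^{\,4}}\,\widehat{\operatorname{Var}}(\hat{\bar{w}}),
\]

where $W$ is the known total landed weight and $\hat{\bar{w}}$ is the estimated average fish weight.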

Relevance: 100.00%

Publisher:

Abstract:

A two-stage sampling design is used to estimate the variances of the numbers of yellowfin in different age groups caught in the eastern Pacific Ocean. For purse seiners, the primary sampling unit (n) is a brine well containing fish from a month-area stratum; the fish lengths (m) measured from each well are the secondary units. The fish cannot be selected at random from the wells because of practical limitations. The effects of different sampling methods and other factors on the reliability and precision of statistics derived from the length-frequency data were therefore examined, and modifications are recommended where necessary. Lengths of fish measured during the unloading of six test wells revealed two forms of inherent size stratification: 1) short-term disruptions of the existing pattern of sizes, and 2) transition zones between long-term trends in sizes. To some degree, all wells exhibited cyclic changes in mean size and variance during unloading. In half of the wells, size selection by the unloaders induced a change in mean size. As a result of stratification, the sequence of sizes removed from all wells was non-random, regardless of whether a well contained fish from a single set or from more than one set. The number of modal sizes in a well was not related to the number of sets. In an additional well composed of fish from several sets, an experiment on vertical mixing indicated that a representative sample of the contents may be restricted to the bottom half of the well. The contents of the test wells were used to generate 25 simulated wells and to compare the results of three sampling methods applied to them. The methods were: (1) random sampling (also used as a standard), (2) protracted sampling, in which the selection process was extended over a large portion of a well, and (3) measuring fish consecutively during removal from the well. Repeated sampling by each method with different combinations of n and m indicated that, because the principal source of size variation occurred among primary units, increasing n was the most effective way to reduce the variance estimates of both the age-group sizes and the total number of fish in the landings. Protracted sampling largely circumvented the effects of size stratification, and its performance was essentially comparable to that of random sampling; sampling by this method is recommended. Consecutive-fish sampling produced more biased estimates with greater variances. Analysis of the 1988 length-frequency samples indicated that, for age groups that appear most frequently in the catch, a minimum sampling frequency of one primary unit in six for each month-area stratum would reduce the coefficients of variation (CV) of their size estimates to approximately 10 percent or less. Additional stratification of samples by set type, rather than by month-area alone, further reduced the CVs of scarce age groups, such as the recruits, and potentially improved their accuracy. The CVs of recruitment estimates for completely-fished cohorts during the 1981-1984 period were in the vicinity of 3 to 8 percent. Recruitment estimates and their variances were also relatively insensitive to changes in the individual quarterly catches and variances, respectively, of which they were composed. (PDF contains 70 pages.)
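
The conclusion that increasing n is the most effective lever follows from the standard two-stage variance decomposition, shown here in its equal-size textbook form (the study's estimator handles unequal well contents):

\[
\widehat{\operatorname{Var}}(\bar{y}) \approx \frac{s_b^2}{n} + \frac{s_w^2}{nm}, \qquad
\mathrm{CV}(\hat{N}_a) = \frac{\sqrt{\widehat{\operatorname{Var}}(\hat{N}_a)}}{\hat{N}_a},
\]

where $s_b^2$ and $s_w^2$ are the between-well and within-well variances: when $s_b^2$ dominates, increasing m (fish per well) leaves the leading term untouched, and only increasing n (wells) reduces the variance appreciably.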

Relevance: 100.00%

Publisher:

Abstract:

We provide a model that bridges the gap between two benchmark models of strategic network formation: Jackson and Wolinsky's model based on bilateral formation of links, and Bala and Goyal's two-way flow model, where links can be unilaterally formed. In the model introduced and studied here, a link can be created unilaterally. When it is only supported by one of the two players, the flow through the link suffers a certain decay, but when it is supported by both, the flow runs without friction. When the decay in links supported by only one player is maximal (i.e., there is no flow), we have Jackson and Wolinsky's connections model without decay, while when flow in such links is perfect we have Bala and Goyal's two-way flow model. We study Nash, strict Nash, and pairwise stability for the intermediate models. Efficiency and dynamics are also examined.
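
Using the abstract's own description, the bridge can be written with a single decay parameter $\alpha \in [0,1]$ applied to links supported by only one player (a minimal sketch of the flow rule, not the paper's full payoff specification):

\[
\text{flow through link } ij =
\begin{cases}
1, & \text{both } i \text{ and } j \text{ support it},\\[2pt]
\alpha, & \text{exactly one supports it},\\[2pt]
0, & \text{neither supports it},
\end{cases}
\]

so that $\alpha = 0$ recovers Jackson and Wolinsky's connections model without decay, and $\alpha = 1$ recovers Bala and Goyal's two-way flow model.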

Relevance: 100.00%

Publisher:

Abstract:

Co-management is a system or process in which responsibility and authority for the management of common resources are shared between the state, local users of the resources, and other stakeholders, and in which these parties have the legal authority to administer the resource jointly. Co-management has received increasing attention in recent years as a potential strategy for managing fisheries. This paper presents and discusses the results of a survey undertaken in the Kenyan part of Lake Victoria to assess the conditions - the behaviour, attitudes, and characteristics of resource users, as well as community institutions - that can support co-management. It analyses the results of this survey with respect to a series of parameters, identified by Pinkerton (1989), as necessary preconditions for the successful involvement of communities in resource management. The survey was implemented through a two-stage stratified random sampling technique based on district and beach-size strata; an illustrative sketch of such a draw follows below. A total of 405 fishers, drawn from 25 fish-landing beaches, were interviewed using a structured questionnaire. The paper concludes that while Kenya's Lake Victoria fishery appears to satisfy a number of these preconditions, it fails to satisfy others. Preconditions in this latter category include the definition of boundaries of fishing grounds, community members' rights to the resource, and the delegation and legislation of local responsibility and authority. Additional work is required to further elaborate and understand these shortcomings.
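
For illustration only, a two-stage stratified draw of the kind described (strata by district and beach size, then fishers within selected beaches) could look like the sketch below; the data shapes, names, and sample sizes are hypothetical, not the survey's actual protocol.

```python
import random

def two_stage_stratified_sample(strata, beaches_per_stratum, fishers_per_beach):
    """First stage: randomly select beaches within each (district, size) stratum.
    Second stage: randomly select fishers within each selected beach."""
    sample = []
    for stratum, beaches in strata.items():
        chosen = random.sample(list(beaches), min(beaches_per_stratum, len(beaches)))
        for beach in chosen:
            fishers = beaches[beach]
            k = min(fishers_per_beach, len(fishers))
            sample.extend((stratum, beach, f) for f in random.sample(fishers, k))
    return sample

# Hypothetical strata: (district, beach-size class) -> {beach: [fisher ids]}
strata = {
    ("district-1", "large"): {"beach-A": list(range(120)), "beach-B": list(range(90))},
    ("district-2", "small"): {"beach-C": list(range(35)), "beach-D": list(range(28))},
}
print(len(two_stage_stratified_sample(strata, beaches_per_stratum=1, fishers_per_beach=16)))
```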

Relevance: 100.00%

Publisher:

Abstract:

With data centers being the supporting infrastructure for a wide range of IT services, their efficiency has become a major concern to operators, as well as to society, for both economic and environmental reasons. The goal of this thesis is to design energy-efficient algorithms that reduce energy cost while minimizing compromise to service. We focus on the algorithmic challenges at different levels of energy optimization across the data center stack. The algorithmic challenge at the device level is to improve the energy efficiency of a single computational device via techniques such as job scheduling and speed scaling. We analyze the common speed scaling algorithms in both the worst-case model and the stochastic model to answer some fundamental issues in the design of speed scaling algorithms. The algorithmic challenge at the local data center level is to dynamically allocate resources (e.g., servers) and to dispatch the workload in a data center. We develop an online algorithm to make a data center more power-proportional by dynamically adapting the number of active servers. The algorithmic challenge at the global data center level is to dispatch the workload across multiple data centers, considering the geographical diversity of electricity prices, the availability of renewable energy, and network propagation delays. We propose algorithms to jointly optimize routing and provisioning in an online manner. Motivated by the above online decision problems, we move on to study a general class of online problems named "smoothed online convex optimization", which seeks to minimize the sum of a sequence of convex functions when "smooth" solutions are preferred. This model allows us to bridge different research communities and helps us reach a more fundamental understanding of general online decision problems.
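
In its usual formulation (an assumption here, since the abstract does not spell it out), smoothed online convex optimization asks an online decision-maker to choose actions $x_t$ that trade off per-round convex "hitting" costs against a penalty on movement between rounds:

\[
\min_{x_1,\dots,x_T} \;\sum_{t=1}^{T} c_t(x_t) + \sum_{t=1}^{T} \beta\,\lVert x_t - x_{t-1} \rVert,
\]

where each convex cost $c_t$ is revealed only at time $t$. In the data center setting, $x_t$ can be read as the number of active servers, with the switching term modeling the cost of toggling servers on and off.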

Relevance: 100.00%

Publisher:

Abstract:

Hypervelocity impact of meteoroids and orbital debris poses a serious and growing threat to spacecraft. To study hypervelocity impact phenomena, a comprehensive ensemble of real-time, concurrently operated diagnostics has been developed and implemented in the Small Particle Hypervelocity Impact Range (SPHIR) facility. This suite of simultaneously operated instrumentation provides multiple complementary measurements that facilitate the characterization of many impact phenomena in a single experiment. The investigation of hypervelocity impact phenomena described in this work focuses on normal impacts of 1.8 mm nylon 6/6 cylinder projectiles on variable-thickness aluminum targets. The SPHIR facility two-stage light-gas gun is capable of routinely launching 5.5 mg nylon impactors to speeds of 5 to 7 km/s. Refinement of legacy SPHIR operation procedures and the investigation of first-stage pressure have improved the velocity performance of the facility, resulting in an increase in average impact velocity of at least 0.57 km/s. Results for the perforation area indicate that the considered range of target thicknesses represents multiple regimes describing the non-monotonic scaling of target perforation with decreasing target thickness. The laser side-lighting (LSL) system has been developed to provide ultra-high-speed shadowgraph images of the impact event. This novel optical technique is demonstrated to characterize the propagation velocity and two-dimensional optical density of impact-generated debris clouds. Additionally, a debris capture system is located behind the target during every experiment to provide complementary information regarding the trajectory distribution and penetration depth of individual debris particles. The utilization of a coherent, collimated illumination source in the LSL system facilitates the simultaneous measurement of impact phenomena with near-IR and UV-vis spectrograph systems. Comparison of LSL images to concurrent IR results indicates two distinctly different phenomena. A high-speed, pressure-dependent IR-emitting cloud is observed in experiments to expand at velocities much higher than those of the debris and ejecta phenomena observed using the LSL system. In double-plate target configurations, this phenomenon is observed to interact with the rear wall several microseconds before the subsequent arrival of the debris cloud. Additionally, dimensional analysis presented by Whitham for blast waves is shown to describe the pressure-dependent radial expansion of the observed IR-emitting phenomena. Although this work focuses on a single hypervelocity impact configuration, the diagnostic capabilities and techniques described can be used with a wide variety of impactors, materials, and geometries to investigate any number of engineering and scientific problems.
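
The abstract does not reproduce the dimensional analysis, but classical point-blast scaling arguments of this general kind relate the radius of a blast front to the deposited energy and the ambient gas; a standard strong-shock form, given here only as a generic reference and not as Whitham's specific result, is

\[
R(t) \sim \left(\frac{E\,t^{2}}{\rho_0}\right)^{1/5},
\]

with the ambient pressure $p_0$ entering through the characteristic radius $R_c \sim (E/p_0)^{1/3}$ at which the expansion departs from the strong-shock regime.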

Relevance: 100.00%

Publisher:

Abstract:

This paper reports that tunable, self-phase-stabilized infrared laser pulses have been generated from a two-stage optical parametric amplifier. With an 800 nm pump source, the output idler pulses are tunable from 1.3 μm to 2.3 μm, and the maximum output energy of the idler pulses is higher than 1 mJ at 1.6 μm using a 6 mJ pump laser. A carrier-envelope phase fluctuation of ~0.15 rad (rms) for the idler pulses is measured over more than one hour using a home-built f-to-2f interferometer.
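
The tuning range is tied to photon energy conservation in the parametric process: with an 800 nm pump,

\[
\frac{1}{\lambda_{\mathrm{pump}}} = \frac{1}{\lambda_{\mathrm{signal}}} + \frac{1}{\lambda_{\mathrm{idler}}},
\]

so an idler at 1.6 μm corresponds to the degeneracy point (signal also at 1.6 μm), while idlers at 1.3 μm and 2.3 μm pair with signals near 2.1 μm and 1.2 μm, respectively.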

Relevance: 100.00%

Publisher:

Abstract:

A direct two's-complement parallel array multiplication algorithm is introduced and modified for digital optical numerical computation. The modified version overcomes the problems encountered in the conventional optical two's-complement algorithm. In the array, all the summands are generated in parallel, and the summands having the same weights are added simultaneously without carries, resulting in a product expressed in a mixed two's-complement system. In a two-stage array, complex multiplication is possible using four real subarrays. Furthermore, with a three-stage array architecture, complex matrix operations are straightforwardly accomplished. In the experiment, parallel two-stage array complex multiplication with liquid-crystal panels is demonstrated.
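
A software sketch of the carry-free idea follows; it is hypothetical and says nothing about the paper's optical encoding. All partial-product summands are generated "in parallel" (a double loop here), same-weight summands are accumulated per column without carry propagation, and the mixed-digit result (digits may be negative) is resolved only in a single final weighted evaluation.

```python
def array_multiply(a: int, b: int, width: int = 8) -> int:
    """Two's-complement multiply via parallel summand generation and
    carry-free column accumulation (a mixed two's-complement result)."""
    abits = [(a >> i) & 1 for i in range(width)]  # LSB first
    bbits = [(b >> i) & 1 for i in range(width)]
    sign = lambda i: -1 if i == width - 1 else 1  # MSB carries negative weight
    columns = [0] * (2 * width)                   # carry-free column sums
    for i in range(width):
        for j in range(width):
            columns[i + j] += sign(i) * sign(j) * abits[i] * bbits[j]
    # One final evaluation resolves all weights (and any implicit carries).
    return sum(d << k for k, d in enumerate(columns))

assert array_multiply(-3, 5) == -15
assert array_multiply(-7, -9) == 63
```

A complex product then needs four such real multiplications, since $(a+ib)(c+id) = (ac-bd) + i(ad+bc)$, which matches the four real subarrays mentioned above.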

Relevance: 100.00%

Publisher:

Abstract:

Core-shell copolymers of poly(butyl acrylate) (core) and polystyrene (shell) were synthesized by emulsion polymerization conducted in two stages. Itaconic acid was added as a functional monomer in the polymerization of the core to verify its effect on the mechanical and processing properties. The copolymers were characterized by dynamic light scattering (DLS), transmission electron microscopy (TEM), size exclusion chromatography (SEC), infrared spectroscopy (FTIR), and differential scanning calorimetry (DSC). The incorporation of the functional monomer was confirmed by DSC and quantified by titration. The ratio of poly(butyl acrylate) to polystyrene directly influenced the processing and mechanical properties of the polymer. Copolymers with polystyrene contents above 50% were processed by compression and extrusion at room temperature, exhibiting baroplastic behavior. The presence of the functional monomer did not alter the processing of the polymer and significantly improved its tensile strength, increasing its toughness.

Relevance: 100.00%

Publisher:

Abstract:

Part I. Proton Magnetic Resonance of Polynucleotides and Transfer RNA.

Proton magnetic resonance was used to follow the temperature-dependent intramolecular stacking of the bases in the polynucleotides of adenine and cytosine. Analysis of the results on the basis of a two-state stacked-unstacked model yielded values of -4.5 kcal/mole and -9.5 kcal/mole for the enthalpies of stacking in polyadenylic and polycytidylic acid, respectively.
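
In a two-state analysis of this kind, the observed chemical shift is a population-weighted average of the stacked and unstacked shifts, and the enthalpy follows from a van't Hoff treatment; this is the standard sketch, and the original analysis may differ in detail:

\[
\delta_{\mathrm{obs}}(T) = f\,\delta_{\mathrm{stacked}} + (1-f)\,\delta_{\mathrm{unstacked}}, \qquad
K = \frac{f}{1-f}, \qquad
\ln K = -\frac{\Delta H^{\circ}}{RT} + \frac{\Delta S^{\circ}}{R},
\]

so the stacking enthalpy is obtained from the slope of $\ln K$ against $1/T$.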

The interaction of purine with these molecules was also studied by pmr. Analysis of these results and comparison of the thermal unstacking of polynucleotides and short-chain nucleotides indicate that the bases contained in stacks within the long-chain polynucleotides are, on the average, closer together than the bases contained in stacks in the short-chain nucleotides.

Temperature and purine studies were also carried out with an aqueous solution of formylmethionine transfer ribonucleic acid. Comparison of these results with the results of similar experiments with the homopolynucleotides of adenine, cytosine, and uracil indicates that the purine is probably intercalating into loop regions of the molecule.

The solvent denaturation of phenylalanine transfer ribonucleic acid was followed by pmr. In a solvent mixture containing 83 volume per cent dimethylsulfoxide and 17 per cent deuterium oxide, the tRNA molecule is rendered quite flexible. It is possible to resolve resonances of protons on the common bases and on certain modified bases.

Part II. Electron Spin Relaxation Studies of Manganese (II) Complexes in Acetonitrile.

The electron paramagnetic resonance spectra of three Mn+2 complexes, [Mn(CH3CN)6]+2, [MnCl4]-2, and [MnBr4]-2, in acetonitrile were studied in detail. The objective of this study was to relate changes in the effective spin Hamiltonian parameters and the resonance line widths to the structure of these molecular complexes as well as to dynamical processes in solution.

Of the three systems studied, the results obtained from the [Mn(CH3CN)6]+2 system were the most straightforward to interpret. Resonance broadening attributable to manganese spin-spin dipolar interactions was observed as the manganese concentration was increased.

In the [MnCl4]-2 system, solvent fluctuations and dynamical ion-pairing appear to be significant in determining electron spin relaxation.

In the [MnBr4]-2 system, solvent fluctuations, ion-pairing, and Br- ligand exchange provide the principal means of electron spin relaxation. It was also found that the spin relaxation in this system is dependent upon the field strength and is directly related to the manganese concentration. A relaxation theory based on a two-state collisional model was developed to account for the observed behavior.

Relevance: 100.00%

Publisher:

Abstract:

Structural design is a decision-making process in which a wide spectrum of requirements, expectations, and concerns needs to be properly addressed. Engineering design criteria are considered together with societal and client preferences, and most of these design objectives are affected by the uncertainties surrounding a design. Therefore, realistic design frameworks must be able to handle multiple performance objectives and incorporate uncertainties from numerous sources into the process.

In this study, a multi-criteria based design framework for structural design under seismic risk is explored. The emphasis is on reliability-based performance objectives and their interaction with economic objectives. The framework has analysis, evaluation, and revision stages. In the probabilistic response analysis, seismic loading uncertainties as well as modeling uncertainties are incorporated. For evaluation, two approaches are suggested: one based on preference aggregation and the other based on socio-economics. Both implementations of the general framework are illustrated with simple but informative design examples to explore the basic features of the framework.

The first approach uses concepts similar to those found in multi-criteria decision theory, and directly combines reliability-based objectives with others. This approach is implemented in a single-stage design procedure. In the socio-economics based approach, a two-stage design procedure is recommended in which societal preferences are treated through reliability-based engineering performance measures, but emphasis is also given to economic objectives because these are especially important to the structural designer's client. A rational net asset value formulation including losses from uncertain future earthquakes is used to assess the economic performance of a design. A recently developed assembly-based vulnerability analysis is incorporated into the loss estimation.
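
The abstract does not give the formulation itself; a generic discounted net asset value of the kind described might read as follows (a hedged sketch, not the study's exact expression):

\[
\mathrm{NAV} = B - C_0 - \sum_{t=1}^{T} \frac{\mathbb{E}[L_t]}{(1+r)^{t}},
\]

where $B$ is the benefit derived from the facility, $C_0$ the initial cost, $\mathbb{E}[L_t]$ the expected earthquake loss in year $t$ from the vulnerability analysis, and $r$ the discount rate.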

The presented performance-based design framework allows investigation of various design issues and their impact on a structural design. It is flexible and readily allows incorporation of new methods and concepts in seismic hazard specification, structural analysis, and loss estimation.

Relevance: 100.00%

Publisher:

Abstract:

Events or stimuli early in life can affect the development of the individual; among these is maternal smoking. Isolated maternal exposure to nicotine, the main component of cigarettes, causes short- and long-term metabolic alterations in the offspring, such as increased adiposity, leptin resistance, and thyroid and adrenal dysfunction. However, cigarette smoke contains other components with potential toxic effects. We therefore proposed to compare the effects of two forms of neonatal exposure to cigarette smoke on the short- and long-term endocrine-metabolic profile of the offspring. On day 3 after birth, suckling rats were assigned to two models. Model I (exposure through breast milk): litters were divided into a smoke-exposed group (SE; n=8), in which lactating dams were exposed to smoke from 2R1F cigarettes (1.7 mg nicotine/cigarette, 1 h, 4 times a day) while separated from their litters, and a control group (C; n=8), in which dams were separated from their litters and exposed to filtered air. Model II (direct exposure to smoke): litters were divided into a smoke-exposed group (SE; n=8), in which dams and offspring were exposed to 2R1F cigarette smoke, and a control group (C; n=8), in which dams and offspring were exposed to filtered air. Tobacco exposure continued until weaning. Dams were sacrificed at weaning, and offspring at weaning and at 180 days of age. Smoke-exposed lactating dams showed hypoleptinemia (-46%), hyperprolactinemia (+50%), hypoinsulinemia (-40%), and decreased triglycerides (-53%). Regarding the biochemical composition of the milk, SE dams showed increased lactose (+52%) and triglycerides (+78%). In Model I, SE offspring at weaning showed decreased total body fat (-24%), increased total body protein (+17%), decreased glycemia (-11%), hyperinsulinemia (+28%), hypocorticosteronemia (-40%), and increased triglycerides (+34%). As adults, SE offspring showed only altered adrenal function, with lower catecholamine content (-50%) and lower tyrosine hydroxylase expression in the adrenal medulla (-56%). In Model II, SE offspring at 21 days showed decreased body mass (-7%), nose-to-anus length (-5%), retroperitoneal fat (-59%), and visceral adipocyte area (-60%), with larger subcutaneous adipocyte area (+95%), increased T4 (+59%), corticosterone (+60%), and adrenal catecholamine content (+58%), and hypoinsulinemia (-29%). At 180 days, SE offspring showed increased food intake (+10%), visceral fat deposits (~60%), and total body fat content (+50%), smaller subcutaneous adipocyte area (-24%), increased leptin (+85%), glycemia (+11%), adiponectin (+1.4x), T3 (+71%), T4 (+57%), and TSH (+36.5%), with lower corticosterone (-41%) and adrenal catecholamines (-57%), and increased triglycerides (+65%). Taken together, our data highlight the negative impact of cigarette smoke exposure on neonatal development, with the emergence of later endocrine disorders that arise at least in part from the alterations observed in the mothers. Therefore, regardless of the form of exposure, whether via breastfeeding or by direct inhalation, it is very important to alert society to the possible long-term metabolic complications of involuntary neonatal exposure to tobacco smoke.

Relevance: 100.00%

Publisher:

Abstract:

Bimaxillary orthognathic surgeries represent a challenge for surgeons, especially in reproducing the treatment plan in the operating room. The use of surgical guides allows better reproduction of the plan, but this requires an accurate model surgery technique. The objective of this study is to compare the accuracy of mandibular repositioning obtained with two different model surgery methods used for planning bimaxillary surgeries with an inverted surgical sequence. In this study, a resin skull was used to simulate a patient. Impressions were taken, and the casts were poured and mounted on a semi-adjustable articulator by means of a facebow transfer and a wax bite registration. Prediction tracings for 10 different treatment plans were made in the Dolphin Imaging software and then reproduced with the standard (CM I) and modified (CM II) model surgery methods (T1). To refine the assessment of mandibular repositioning, the model surgeries were repeated after one month (T2). The mandibular casts were measured on the Erickson platform before and after repositioning to contrast the results. Differences in repositioning time were also recorded. Descriptive statistics and t-tests were used to analyze the data and compare the results. This study suggests that the vertical and lateral repositioning of the mandibular casts was similar with both methods; however, there was greater anteroposterior imprecision when the standard model surgery method was used for planning orthognathic surgeries with an inverted sequence. The time required to reposition the mandibular cast on the semi-adjustable articulator with the modified approach (CM II) was significantly shorter than the time required to reposition the maxillary cast on the Erickson platform (CM I).