924 results for J22 - Time Allocation and Labor Supply
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Graduate Program in Chemistry - IQ
Abstract:
OBJECTIVE: The aim of this study was to assess the time spent on direct (DBB - direct bracket bonding) and indirect (IBB - indirect bracket bonding) bracket bonding techniques. The duration of the laboratory step (IBB) and the clinical steps (DBB and IBB), as well as the prevalence of loose brackets after a 24-week follow-up, were evaluated. METHODS: Seventeen patients (7 men and 10 women) with a mean age of 21 years who required orthodontic treatment were selected for this study. A total of 304 brackets were used (151 DBB and 153 IBB). The same bracket type and bonding material were used in both groups. Data were submitted to statistical analysis by the non-parametric Wilcoxon test at a 5% significance level. RESULTS: Considering total time, the IBB technique was more time-consuming than DBB (p < 0.001). However, considering only the clinical phase, IBB took less time than DBB (p < 0.001). There was no significant difference (p = 0.910) between the time spent on laboratory bracket positioning plus the clinical session for IBB and the clinical procedure alone for DBB. Additionally, no difference was found in the prevalence of loose brackets between the groups. CONCLUSION: IBB can be suggested as a valid clinical procedure, since its clinical session was faster and the total time spent on laboratory bracket positioning plus the clinical procedure was similar to that of DBB. In addition, both approaches resulted in a similar frequency of loose brackets.
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
A deep theoretical analysis of the graph cut image segmentation framework presented in this paper simultaneously translates into important contributions in several directions. The most important practical contribution of this work is a full theoretical description, and implementation, of a novel powerful segmentation algorithm, GC(max). The output of GC(max) coincides with a version of a segmentation algorithm known as Iterative Relative Fuzzy Connectedness, IRFC. However, GC(max) is considerably faster than the classic IRFC algorithm, which we prove theoretically and show experimentally. Specifically, we prove that, in the worst-case scenario, the GC(max) algorithm runs in linear time with respect to the variable M = |C| + |Z|, where |C| is the image scene size and |Z| is the size of the allowable range, Z, of the associated weight/affinity function. For most implementations, Z is identical to the set of allowable image intensity values, and its size can be treated as small with respect to |C|, meaning that O(M) = O(|C|). In such a situation, GC(max) runs in linear time with respect to the image size |C|. We show that the output of GC(max) constitutes a solution of a graph cut energy minimization problem, in which the energy is defined as the ℓ∞ norm ‖F_P‖_∞ of the map F_P that associates, with every element e from the boundary of an object P, its weight w(e). This formulation brings IRFC algorithms to the realm of the graph cut energy minimizers, with energy functions ‖F_P‖_q for q ∈ [1, ∞]. Of these, the best-known minimization problem is for the energy ‖F_P‖_1, which is solved by the classic min-cut/max-flow algorithm, often referred to as the Graph Cut algorithm. We notice that a minimization problem for ‖F_P‖_q, q ∈ [1, ∞), is identical to that for ‖F_P‖_1 when the original weight function w is replaced by w^q.
Thus, any algorithm GC(sum) solving the ‖F_P‖_1 minimization problem also solves the one for ‖F_P‖_q with q ∈ [1, ∞), so just two algorithms, GC(sum) and GC(max), are enough to solve all ‖F_P‖_q minimization problems. We also show that, for any fixed weight assignment, the solutions of the ‖F_P‖_q minimization problems converge to a solution of the ‖F_P‖_∞ minimization problem (the identity ‖F_P‖_∞ = lim_{q→∞} ‖F_P‖_q alone is not enough to deduce this). An experimental comparison of the performance of the GC(max) and GC(sum) algorithms is included. It concentrates on comparing the actual (as opposed to provable worst-case) running times of the algorithms, as well as the influence of the choice of seeds on the output.
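The w^q substitution above can be checked on a toy example: since ‖F_P‖_q = (Σ w(e)^q)^(1/q) and x ↦ x^(1/q) is monotone, minimizing the q-norm over cuts is the same as minimizing the 1-norm after raising each weight to the power q. The cuts and weights below are hypothetical.

```python
# Toy illustration (hypothetical cut weights): minimizing ||F_P||_q is
# equivalent to minimizing the plain sum of weights once each w(e) is
# replaced by w(e)**q, because x -> x**(1/q) is monotone increasing.

def q_norm(weights, q):
    """||F_P||_q for the multiset of boundary-edge weights of a cut."""
    return sum(w ** q for w in weights) ** (1.0 / q)

# Three hypothetical cuts, each given by the weights on its boundary edges.
cuts = {
    "A": [3.0, 1.0, 1.0],
    "B": [2.0, 2.0, 1.5],
    "C": [4.0, 0.5],
}

q = 3
best_by_qnorm = min(cuts, key=lambda c: q_norm(cuts[c], q))
best_by_sum_of_wq = min(cuts, key=lambda c: sum(w ** q for w in cuts[c]))
assert best_by_qnorm == best_by_sum_of_wq  # same minimizer either way
```

This is exactly why a single GC(sum)-type solver covers every finite q.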
Abstract:
The extension of Boltzmann-Gibbs thermostatistics proposed by Tsallis introduces an additional parameter q alongside the inverse temperature beta. Here, we show that a previously introduced generalized Metropolis dynamics to evolve spin models is not local and does not obey detailed energy balance. In this dynamics, locality is only retrieved for q = 1, which corresponds to the standard Metropolis algorithm. Nonlocality implies very time-consuming computer calculations, since the energy of the whole system must be reevaluated when a single spin is flipped. To circumvent this costly calculation, we propose a generalized master equation, which gives rise to a local generalized Metropolis dynamics that obeys detailed energy balance. To compare the different critical values obtained with other generalized dynamics, we perform equilibrium Monte Carlo simulations of the Ising model. Using short-time nonequilibrium numerical simulations, we also calculate for this model the critical temperature and the static and dynamic critical exponents as functions of q. Even for q ≠ 1, we show that suitable time-evolving power laws can be found for each initial condition. Our numerical experiments corroborate the literature results when we use the nonlocal dynamics, showing that short-time parameter determination also works in this case. However, the dynamics governed by the new master equation leads to different results for the critical temperatures and also for the critical exponents, affecting the universality classes. We further propose a simple algorithm to optimize the fitting of the time evolution by a power law, considering two successive refinements in a log-log plot.
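For reference, the q = 1 limit mentioned above is the standard local Metropolis dynamics: a single-spin flip changes the energy only through the flipped spin's nearest neighbours. A minimal sketch for the 2D Ising model (lattice size, temperature, and sweep count are illustrative, not the paper's settings):

```python
import math
import random

# Minimal sketch of standard (q = 1) local Metropolis dynamics for the 2D
# Ising model with J = 1, h = 0 and periodic boundaries. The move is local:
# only the four nearest neighbours enter the energy change dE.

def metropolis_sweep(spins, L, beta, rng):
    """One sweep of single-spin-flip Metropolis updates."""
    for _ in range(L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
              + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        dE = 2 * spins[i][j] * nb
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            spins[i][j] *= -1

rng = random.Random(0)
L, beta = 16, 1.0  # beta well above 1/T_c, so the ordered phase persists
spins = [[1 for _ in range(L)] for _ in range(L)]
for _ in range(100):
    metropolis_sweep(spins, L, beta, rng)
m = abs(sum(sum(row) for row in spins)) / (L * L)  # magnetization per spin
```

For q ≠ 1 the generalized acceptance rule couples to the total energy, which is exactly the nonlocality the abstract's master-equation construction removes.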
Abstract:
This thesis is concerned with in-situ time-, temperature- and pressure-resolved synchrotron X-ray powder diffraction investigations of a variety of inorganic compounds with two-dimensional layer structures and three-dimensional framework structures. In particular, phase stability, reaction kinetics, thermal expansion and compressibility at non-ambient conditions have been studied for 1) phosphates with composition MIV(HPO4)2·nH2O (MIV = Ti, Zr); 2) pyrophosphates and pyrovanadates with composition MIVX2O7 (MIV = Ti, Zr and X = P, V); 3) molybdates with composition ZrMo2O8. The results are compiled in seven published papers and two manuscripts. Reaction kinetics for the hydrothermal synthesis of α-Ti(HPO4)2·H2O and the intercalation of alkane diamines in α-Zr(HPO4)2·H2O were studied using time-resolved experiments. In the high-temperature transformation of γ-Ti(PO4)(H2PO4)·2H2O to TiP2O7, three intermediate phases, γ'-Ti(PO4)(H2PO4)·(2-x)H2O, β-Ti(PO4)(H2PO4) and Ti(PO4)(H2P2O7)0.5, were found to crystallise at 323, 373 and 748 K, respectively. A new tetragonal three-dimensional phosphate phase called τ-Zr(HPO4)2 was prepared, and its structure was subsequently determined and refined using the Rietveld method. In the high-temperature transformation from τ-Zr(HPO4)2 to cubic α-ZrP2O7, two new orthorhombic intermediate phases were found. The first intermediate phase, ρ-Zr(HPO4)2, forms at 598 K, and the second phase, β-ZrP2O7, at 688 K. Their respective structures were solved using direct methods and refined using the Rietveld method. In-situ high-pressure studies of τ-Zr(HPO4)2 revealed two new phases, tetragonal ν-Zr(HPO4)2 and orthorhombic ω-Zr(HPO4)2, which crystallise at 1.1 and 8.2 GPa, respectively. The structure of ν-Zr(HPO4)2 was solved and refined using the Rietveld method. The high-pressure properties of the pyrophosphates ZrP2O7 and TiP2O7, and of the pyrovanadate ZrV2O7, were studied up to 40 GPa.
Both pyrophosphates display smooth compression up to the highest pressures, while ZrV2O7 undergoes a phase transformation at 1.38 GPa from cubic to pseudo-tetragonal β-ZrV2O7 and becomes X-ray amorphous at pressures above 4 GPa. In-situ high-pressure studies of trigonal α-ZrMo2O8 revealed the existence of two new phases, monoclinic δ-ZrMo2O8 and triclinic ε-ZrMo2O8, which crystallise at 1.1 and 2.5 GPa, respectively. The structure of δ-ZrMo2O8 was solved by direct methods and refined using the Rietveld method.
Abstract:
Environmental computer models are deterministic models devoted to predicting environmental phenomena such as air pollution or meteorological events. Numerical model output is given in terms of averages over grid cells, usually at high spatial and temporal resolution. However, these outputs are often biased, with unknown calibration, and are not equipped with any information about the associated uncertainty. Conversely, data collected at monitoring stations are more accurate, since they essentially provide the true levels. Due to the leading role played by numerical models, it is now important to compare model output with observations. Statistical methods developed to combine numerical model output and station data are usually referred to as data fusion. In this work, we first combine ozone monitoring data with ozone predictions from the Eta-CMAQ air quality model in order to forecast in real time the current 8-hour average ozone level, defined as the average of the previous four hours, the current hour, and predictions for the next three hours. We propose a Bayesian downscaler model based on first differences with a flexible coefficient structure and an efficient computational strategy to fit the model parameters. Model validation for the eastern United States shows substantial improvement of our fully inferential approach over the current real-time forecasting system. Furthermore, we consider the introduction of temperature data from a weather forecast model into the downscaler, showing improved real-time ozone predictions. Finally, we introduce a hierarchical model to obtain spatially varying uncertainty associated with numerical model output. We show how we can learn about such uncertainty through suitable stochastic data fusion modeling using some external validation data. We illustrate our Bayesian model by providing the uncertainty map associated with a temperature output over the northeastern United States.
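The forecast target defined above can be sketched directly: the current 8-hour average mixes four observed past hours, the current hour, and three predicted hours. The ozone concentrations below (in ppb) are hypothetical, not data from the study.

```python
# Sketch of the forecast target described above: the current 8-hour average
# ozone level is the mean of the previous four observed hours, the current
# hour, and model predictions for the next three hours. Values are
# hypothetical ozone concentrations in ppb.

def eight_hour_average(past_four, current, next_three_pred):
    if len(past_four) != 4 or len(next_three_pred) != 3:
        raise ValueError("need 4 past hours and 3 predicted hours")
    window = list(past_four) + [current] + list(next_three_pred)
    return sum(window) / 8.0

avg = eight_hour_average([52.0, 55.0, 58.0, 60.0], 63.0, [61.0, 59.0, 56.0])
# -> 58.0
```

In the paper's setting the three future hours would come from the bias-corrected Eta-CMAQ downscaler rather than raw model output.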
Abstract:
This dissertation consists of three self-contained papers that are related to two main topics. In particular, the first and third studies focus on labor market modeling, whereas the second essay presents a dynamic international trade setup.

In Chapter "Expenses on Labor Market Reforms during Transitional Dynamics", we investigate the arising costs of a potential labor market reform from a government point of view. To analyze various effects of unemployment benefits system changes, this chapter develops a dynamic model with heterogeneous employed and unemployed workers.

In Chapter "Endogenous Markup Distributions", we study how markup distributions adjust when a closed economy opens up. In order to perform this analysis, we first present a closed-economy general-equilibrium industry dynamics model, where firms enter and exit markets, and then extend our analysis to the open-economy case.

In Chapter "Unemployment in the OECD - Pure Chance or Institutions?", we examine effects of aggregate shocks on the distribution of the unemployment rates in OECD member countries.

In all three chapters we model systems that behave randomly and operate on stochastic processes. We therefore exploit stochastic calculus that establishes clear methodological links between the chapters.
Abstract:
Analysis and development of data-import procedures for a real-estate listing aggregator dedicated to the sale of tourist stays in holiday homes. The document also covers the implementation of a Web Service conforming to the RESTful architecture for access to and export of the data by authorized third parties via Digest Authentication.
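The Digest Authentication mechanism mentioned above computes a challenge response from two intermediate MD5 hashes (RFC 2617, shown here without the optional qop extension). All credential values below are hypothetical.

```python
import hashlib

# Minimal sketch of the HTTP Digest Authentication response computation
# (RFC 2617, no qop): the client proves knowledge of the password without
# sending it. Usernames, realm, URI and nonce below are hypothetical.

def md5_hex(s):
    return hashlib.md5(s.encode("utf-8")).hexdigest()

def digest_response(user, realm, password, method, uri, nonce):
    ha1 = md5_hex(f"{user}:{realm}:{password}")   # secret part
    ha2 = md5_hex(f"{method}:{uri}")              # request part
    return md5_hex(f"{ha1}:{nonce}:{ha2}")        # value sent by the client

resp = digest_response("partner", "listings", "s3cret",
                       "GET", "/api/listings", "abc123")
```

The server repeats the same computation with its stored HA1 and grants the export request only if the two response values match.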
Abstract:
High Performance Computing is a technology used by computational clusters to create processing systems capable of providing services far more powerful than traditional computers. As a consequence, HPC technology has become a decisive factor in industrial competition and in research. HPC systems keep growing in terms of nodes and cores, and forecasts indicate that the number of nodes will soon reach one million. This type of architecture also entails very high costs in terms of resource consumption, which become unsustainable for the industrial market. A centralized scheduler is not able to manage such a large number of resources while maintaining a reasonable response time. This thesis presents a distributed scheduling model based on constraint programming, which models the scheduling problem through a set of temporal and resource constraints that must be satisfied. The scheduler tries to optimize resource performance and aims to approach a desired consumption profile, considered optimal. Several different models are analyzed, and each of them is tested in various environments.
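A minimal sketch of the resource constraint such a scheduler must enforce: each job has a start time, a duration, and a resource demand, and at every instant the total demand of running jobs must stay within the cluster capacity. The jobs and capacity below are hypothetical.

```python
# Sketch of a cumulative resource constraint of the kind a constraint-based
# scheduler must satisfy: at no instant may the total demand of running
# jobs exceed the cluster capacity. Job data below is hypothetical.

def respects_capacity(jobs, capacity, horizon):
    """jobs: list of (start, duration, demand) tuples.
    Returns True iff total demand never exceeds capacity on [0, horizon)."""
    for t in range(horizon):
        load = sum(d for (s, dur, d) in jobs if s <= t < s + dur)
        if load > capacity:
            return False
    return True

jobs = [(0, 3, 2), (1, 2, 3), (3, 4, 4)]
ok = respects_capacity(jobs, capacity=5, horizon=8)
# -> True: the peak load is 5 at t = 1..2, exactly at capacity
```

A constraint solver searches over the start times so that this check (plus the temporal constraints) holds, while steering the resulting load profile toward the desired consumption profile.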
Abstract:
Failing cerebral blood flow (CBF) autoregulation may contribute to cerebral damage after traumatic brain injury (TBI). The purpose of this study was to describe the time course of CO2-dependent vasoreactivity, measured as the CBF velocity response to hyperventilation (vasomotor reactivity [VMR] index). We included 13 patients who had had severe TBI, 8 of whom received norepinephrine (NE) based on clinical indication. In these patients, measurements were also performed after dobutamine administration, with a goal of increasing cardiac output by 30%. Blood flow velocity was measured with transcranial Doppler ultrasound in both hemispheres. All patients except one had an abnormal VMR index in at least one hemisphere within the first 24 h after TBI. In those patients who did not receive catecholamines, the mean VMR index recovered within the first 48 to 72 h. In contrast, in patients who received NE within the first 48-h period, the VMR index did not recover on the second day. Cardiac output and mean CBF velocity increased significantly during dobutamine administration, but the VMR index did not change significantly. In conclusion, CO2 vasomotor reactivity was abnormal in the first 24 h after TBI in most patients, but recovered within 48 h in those who did not receive NE, in contrast to those eventually receiving the drug. The addition of dobutamine to NE had variable but overall insignificant effects on CO2 vasomotor reactivity.