996 results for Decreasing Scale
Abstract:
In this paper, we present a new unified approach and an elementary proof of a very general theorem on the existence of a semicontinuous or continuous utility function representing a preference relation. A simple and interesting new proof of the famous Debreu Gap Lemma is given. In addition, we prove a new Gap Lemma for the rational numbers and derive some consequences. We also prove a theorem which characterizes the existence of upper semicontinuous utility functions on a preordered topological space which need not be second countable. This is a generalization of the classical theorem of Rader which only gives sufficient conditions for the existence of an upper semicontinuous utility function for second countable topological spaces. (C) 2002 Elsevier Science B.V. All rights reserved.
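For reference, the Debreu Gap Lemma mentioned above is usually stated as follows; this is the standard textbook formulation, not necessarily the exact one used in the paper:

```latex
% Debreu's Gap Lemma (standard formulation). A gap of a set T in R is a
% maximal nondegenerate interval disjoint from T that is bounded below
% and above by elements of T.
\[
\forall\, S \subseteq \mathbb{R}\quad \exists\, g : S \to \mathbb{R}
\ \text{strictly increasing such that every gap of } g(S) \text{ is open.}
\]
% Typical use: if u is any utility representation of a continuous
% preference relation, then the composition g(u(.)) is a continuous
% representation.
```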
Abstract:
Aims. The aims of this study were to assess the internal reliability (internal consistency), construct validity, sensitivity and ceiling and floor effects of the Brazilian-Portuguese version of the Impact of Event Scale (IES). Design. Methodological research design. Method. The Brazilian-Portuguese version of the IES was applied to a group of 91 burned patients at three times: the first week after the burn injury (time one), between the fourth and the sixth months (time two) and between the ninth and the 12th months (time three). The internal consistency, construct validity (convergent and dimensionality), sensitivity and ceiling and floor effects were tested. Results. Cronbach's alpha coefficients showed high internal consistency for the total scale (0.87) and for the domains intrusive thoughts (0.87) and avoidance responses (0.76). During hospitalisation (time one), the scale showed low and positive correlations with pain measures immediately before (r = 0.22; p < 0.05) and immediately after baths and dressings (r = 0.21; p < 0.05). After discharge, we found strong and negative correlations with self-esteem (r = -0.52; p < 0.01), strong and positive correlations with depression (r = 0.63; p < 0.01) and low and negative correlations with the Bodily pain (r = -0.24; p < 0.05), Social functioning (r = -0.34; p < 0.01) and Mental health (r = -0.27; p < 0.05) domains of the SF-36 at time two. Regarding sensitivity, no statistically significant differences were observed between mean scale scores according to burned body surface (p = 0.21). The floor effect was observed in most of the IES items. Conclusion. The adapted version of the scale proved reliable and valid for assessing postburn reactions to the impact of the event in the group of patients under analysis. Relevance to clinical practice. The Impact of Event Scale can be used in research and clinical practice to assess nursing interventions aimed at decreasing stress during rehabilitation.
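The internal-consistency statistic reported above, Cronbach's alpha, is straightforward to reproduce; a minimal sketch with hypothetical item scores (the actual IES data are not part of this abstract):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix.

    alpha = k / (k - 1) * (1 - sum(item variances) / variance of total score)
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the summed scale
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Illustrative only: random answers to a hypothetical 15-item IES-like scale.
rng = np.random.default_rng(0)
scores = rng.integers(0, 6, size=(91, 15)).astype(float)
print(f"alpha = {cronbach_alpha(scores):.2f}")
```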
Abstract:
Background: Pain and anxiety are a common problem in all recovery phases after a burn. The Burns Specific Pain Anxiety Scale (BSPAS) was proposed to assess anxiety related to painful procedures in burn patients. Objectives: To assess the internal consistency, discriminative construct validity, dimensionality and convergent construct validity of the Brazilian-Portuguese version of the Burns Specific Pain Anxiety Scale. Design: In this cross-sectional study, the original version of the BSPAS, adapted into Brazilian Portuguese, was tested for internal consistency (Cronbach's alpha), discriminative validity (related to total body surface area burned and sex), dimensionality (through factor analysis), and convergent construct validity (applying the Visual Analogue Scale for pain and the State-Anxiety STAI) in a group of 91 adult burn patients. Results: The adapted version of the BSPAS displayed moderate and positive correlations with pain assessments immediately before baths and dressings (r = 0.32; p < 0.001), immediately after baths and dressings (r = 0.31; p < 0.001) and during the relaxation period (r = 0.31; p < 0.001), and with anxiety assessments (r = 0.34; p < 0.001). No statistically significant differences were observed when comparing mean scores on the adapted version of the BSPAS by sex (p = 0.194) or total body surface area burned (p = 0.162) (discriminative validity). The principal components analysis applied to our sample seems to confirm anxiety as a single domain of the Brazilian-Portuguese version of the BSPAS. Cronbach's alpha showed high internal consistency of the adapted version of the scale (0.90). Conclusion: The 9-item Brazilian-Portuguese version of the BSPAS has shown statistically acceptable levels of reliability and validity for the evaluation of pain-related anxiety in burn patients. The scale can be used to assess nursing interventions aimed at decreasing pain and anxiety related to the performance of painful procedures. (c) 2010 Elsevier Ltd. All rights reserved.
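The dimensionality check reported above (principal components analysis confirming a single domain) can be sketched as follows; the eigenvalue-greater-than-one rule is one common retention criterion, and the data here are hypothetical:

```python
import numpy as np

def n_components_kaiser(items: np.ndarray) -> int:
    """Number of principal components with eigenvalue > 1 (Kaiser criterion),
    computed from the correlation matrix of an (n, k) item-score matrix."""
    corr = np.corrcoef(items, rowvar=False)
    eigvals = np.linalg.eigvalsh(corr)
    return int((eigvals > 1.0).sum())

rng = np.random.default_rng(1)
# Hypothetical one-factor data: a latent anxiety score plus item-level noise.
latent = rng.normal(size=(91, 1))
scores = latent + 0.8 * rng.normal(size=(91, 9))
print("components retained:", n_components_kaiser(scores))  # typically 1
```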
Abstract:
Performance indicators in the public sector have often been criticised for being inadequate and not conducive to analysing efficiency. The main objective of this study is to use data envelopment analysis (DEA) to examine the relative efficiency of Australian universities. Three performance models are developed, namely, overall performance, performance on delivery of educational services, and performance on fee-paying enrolments. The findings based on 1995 data show that the university sector was performing well on technical and scale efficiency but there was room for improving performance on fee-paying enrolments. There were also small slacks in input utilisation. More universities were operating at decreasing returns to scale, indicating a potential to downsize. DEA helps in identifying the reference sets for inefficient institutions and objectively determines productivity improvements. As such, it can be a valuable benchmarking tool for educational administrators and assist in more efficient allocation of scarce resources. In the absence of market mechanisms to price educational outputs, which renders traditional production or cost functions inappropriate, universities are particularly obliged to seek alternative efficiency analysis methods such as DEA.
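For readers unfamiliar with DEA, the underlying computation for each decision-making unit (here, a university) is a small linear program; a minimal input-oriented sketch under variable returns to scale, with made-up input/output data (scipy is assumed; the study's actual models and data are not reproduced here):

```python
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, o):
    """Input-oriented DEA efficiency of unit o under variable returns to scale.

    X: (m_inputs, n_units) input matrix, Y: (s_outputs, n_units) output matrix.
    Solves: min theta  s.t.  X @ lam <= theta * X[:, o],
                             Y @ lam >= Y[:, o],  sum(lam) = 1,  lam >= 0.
    """
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                      # minimize theta
    A_ub = np.vstack([np.c_[-X[:, [o]], X],          # inputs:  X lam - theta x_o <= 0
                      np.c_[np.zeros((s, 1)), -Y]])  # outputs: -Y lam <= -y_o
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    A_eq = np.r_[0.0, np.ones(n)].reshape(1, -1)     # VRS convexity: sum(lam) = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[0]

# Hypothetical data: 2 inputs (staff, budget) and 1 output (graduates), 4 units.
X = np.array([[5.0, 8.0, 6.0, 9.0], [10.0, 14.0, 9.0, 16.0]])
Y = np.array([[20.0, 25.0, 18.0, 24.0]])
for o in range(4):
    print(f"university {o}: efficiency = {dea_efficiency(X, Y, o):.3f}")
```

Efficiency 1.0 marks a unit on the frontier; inefficient units receive a reference set via the nonzero lambdas, which is the benchmarking use discussed in the abstract.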
Abstract:
Work presented within the scope of the Master's programme in Informatics Engineering, as a partial requirement for obtaining the degree of Master in Informatics Engineering.
Abstract:
Online data collection is becoming increasingly common and has some advantages compared to traditional paper-and-pencil formats, such as reducing loss of data, increasing participants' privacy, and decreasing the effect of social desirability. However, the validity and reliability of this administration format must be established before results can be considered acceptable. The aim of this study was to evaluate the validity, reliability, and equivalence of paper-and-pencil and online versions of the Weight Concerns Scale (WCS) when applied to Brazilian university students. A crossover design was used, and the Portuguese version of the WCS (in both paper-and-pencil and online formats) was completed by 100 college students. The results indicated adequate fit in both formats. The simultaneous fit of data for both groups was excellent, with strong invariance between models. Adequate convergent validity, internal consistency, and mean score equivalence of the WCS in both formats were observed. Thus, the WCS presented adequate reliability and validity in both administration formats, with equivalence/stability between answers.
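One component of the equivalence analysis described above, mean-score equivalence between the two administration formats in a crossover design, reduces to a paired comparison; a minimal sketch with hypothetical scores (the full study also used confirmatory factor analysis and invariance testing, which are not reproduced here):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 100                                    # students completing both formats
true_score = rng.normal(30, 8, size=n)     # hypothetical latent WCS score
paper = true_score + rng.normal(0, 2, size=n)
online = true_score + rng.normal(0, 2, size=n)

t, p = stats.ttest_rel(paper, online)      # paired t-test across formats
r = np.corrcoef(paper, online)[0, 1]       # between-format agreement
print(f"paired t = {t:.2f}, p = {p:.3f}, r = {r:.2f}")
```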
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending corresponding approaches beyond the local scale still represents a major challenge, yet is critically important for the development of reliable groundwater flow and contaminant transport models. To address this issue, I have developed a hydrogeophysical data integration technique based on a two-step Bayesian sequential simulation procedure that is specifically targeted towards larger-scale problems. The objective is to simulate the distribution of a target hydraulic parameter based on spatially exhaustive, but poorly resolved, measurements of a pertinent geophysical parameter and locally highly resolved, but spatially sparse, measurements of the considered geophysical and hydraulic parameters. To this end, my algorithm links the low- and high-resolution geophysical data via a downscaling procedure before relating the downscaled regional-scale geophysical data to the high-resolution hydraulic parameter field. I first illustrate the application of this novel data integration approach to a realistic synthetic database consisting of collocated high-resolution borehole measurements of the hydraulic and electrical conductivities and spatially exhaustive, low-resolution electrical conductivity estimates obtained from electrical resistivity tomography (ERT). The overall viability of this method is tested and verified by performing and comparing flow and transport simulations through the original and simulated hydraulic conductivity fields. The corresponding results indicate that the proposed data integration procedure does indeed allow for obtaining faithful estimates of the larger-scale hydraulic conductivity structure and reliable predictions of the transport characteristics over medium- to regional-scale distances. The approach is then applied to a corresponding field scenario consisting of collocated high-resolution measurements of the electrical conductivity, as measured using a cone penetrometer testing (CPT) system, and the hydraulic conductivity, as estimated from electromagnetic flowmeter and slug test measurements, in combination with spatially exhaustive low-resolution electrical conductivity estimates obtained from surface-based electrical resistivity tomography (ERT). The corresponding results indicate that the newly developed data integration approach is indeed capable of adequately capturing both the small-scale heterogeneity as well as the larger-scale trend of the prevailing hydraulic conductivity field.
The results also indicate that this novel data integration approach is remarkably flexible and robust and hence can be expected to be applicable to a wide range of geophysical and hydrological data at all scale ranges. In the second part of my thesis, I evaluate in detail the viability of sequential geostatistical resampling as a proposal mechanism for Markov Chain Monte Carlo (MCMC) methods applied to high-dimensional geophysical and hydrological inverse problems, in order to allow for a more accurate and realistic quantification of the uncertainty associated with the inferred models. Focusing on a series of pertinent crosshole georadar tomographic examples, I investigate two classes of geostatistical resampling strategies with regard to their ability to efficiently and accurately generate independent realizations from the Bayesian posterior distribution. The corresponding results indicate that, despite its popularity, sequential resampling is rather inefficient at drawing independent posterior samples for realistic synthetic case studies, notably for the practically common and important scenario of pronounced spatial correlation between model parameters. To address this issue, I have developed a new gradual-deformation-based perturbation approach, which is flexible with regard to the number of model parameters as well as the perturbation strength. Compared to sequential resampling, this newly proposed approach proves highly effective in decreasing the number of iterations required for drawing independent samples from the Bayesian posterior distribution.
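The gradual-deformation idea referred to above can be sketched compactly: two independent Gaussian realizations are combined with a tunable angle so that the result remains a valid realization of the same Gaussian model, which is what makes it attractive as an MCMC proposal. A minimal sketch under the simplifying assumption of uncorrelated standard-normal parameters (real applications work with correlated geostatistical fields):

```python
import numpy as np

def gradual_deformation(m_old: np.ndarray, theta: float,
                        rng: np.random.Generator) -> np.ndarray:
    """Propose a perturbed model by gradual deformation.

    m_new = m_old * cos(theta) + m_indep * sin(theta) preserves the N(0, 1)
    statistics (and, for correlated fields, the covariance), with theta
    controlling the perturbation strength.
    """
    m_indep = rng.standard_normal(m_old.shape)   # independent realization
    return np.cos(theta) * m_old + np.sin(theta) * m_indep

rng = np.random.default_rng(3)
m = rng.standard_normal(1000)
m_small = gradual_deformation(m, theta=0.1, rng=rng)         # gentle proposal
m_large = gradual_deformation(m, theta=np.pi / 2, rng=rng)   # independent draw
print(np.corrcoef(m, m_small)[0, 1], np.corrcoef(m, m_large)[0, 1])
```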
Abstract:
The Mississippi Valley-type (MVT) Pb-Zn ore district at Mezica is hosted by Middle to Upper Triassic platform carbonate rocks in the Northern Karavanke/Drau Range geotectonic units of the Eastern Alps, northeastern Slovenia. The mineralization at Mezica covers an area of 64 km² with more than 350 orebodies and numerous galena and sphalerite occurrences, which formed epigenetically, both conformable and discordant to bedding. While knowledge on the style of mineralization has grown considerably, the origin of the discordant mineralization is still debated. Sulfur stable isotope analyses of 149 sulfide samples from the different types of orebodies provide new insights on the genesis of these mineralizations and their relationship. Over the whole mining district, sphalerite and galena have δ³⁴S values in the range of -24.7 to -1.5‰ VCDT (-13.5 ± 5.0‰) and -24.7 to -1.4‰ (-10.7 ± 5.9‰), respectively. These values are in the range of the main MVT deposits of the Drau Range. All sulfide δ³⁴S values are negative within a broad range, with δ³⁴S(pyrite) < δ³⁴S(sphalerite) < δ³⁴S(galena) for both conformable and discordant orebodies, indicating isotopically heterogeneous H₂S in the ore-forming fluids and precipitation of the sulfides at thermodynamic disequilibrium. This clearly supports that the main sulfide sulfur originates from bacterially mediated reduction (BSR) of Middle to Upper Triassic seawater sulfate or evaporite sulfate. Thermochemical sulfate reduction (TSR) by organic compounds contributed a minor amount of ³⁴S-enriched H₂S to the ore fluid. The variations of δ³⁴S values of galena and coarse-grained sphalerite at orefield scale are generally larger than the differences observed in single hand specimens. The progressively more negative δ³⁴S values with time along the different sphalerite generations are consistent with mixing of different H₂S sources, with a decreasing contribution of H₂S from regional TSR and an increase from a local H₂S reservoir produced by BSR (i.e., sedimentary biogenic pyrite, organo-sulfur compounds). Galena in discordant ore (-11.9 to -1.7‰; -7.0 ± 2.7‰, n = 12) tends to be enriched in ³⁴S compared with conformable ore (-24.7 to -2.8‰, -11.7 ± 6.2‰, n = 39). A similar trend is observed from fine-crystalline sphalerite I to coarse open-space-filling sphalerite II. Some variation of the sulfide δ³⁴S values is attributed to the inherent variability of bacterial sulfate reduction, including metabolic recycling in a locally partially closed system and the contribution of H₂S from hydrolysis of biogenic pyrite and thermal cracking of organo-sulfur compounds. The results suggest that the conformable orebodies originated by mixing of hydrothermal saline metal-rich fluid with H₂S-rich pore waters during late burial diagenesis, while the discordant orebodies formed by mobilization of the earlier conformable mineralization.
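For readers outside isotope geochemistry, the δ³⁴S notation used throughout this abstract is the standard per-mil deviation of the ³⁴S/³²S ratio from the reference standard:

```latex
% Standard delta notation for sulfur isotopes, reported in per mil
% relative to the Vienna Canyon Diablo Troilite (VCDT) standard.
\[
\delta^{34}\mathrm{S}
  = \left(
      \frac{(^{34}\mathrm{S}/^{32}\mathrm{S})_{\mathrm{sample}}}
           {(^{34}\mathrm{S}/^{32}\mathrm{S})_{\mathrm{VCDT}}}
      - 1
    \right) \times 1000
\]
```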
Abstract:
1. Digital elevation models (DEMs) are often used in landscape ecology to retrieve elevation or first-derivative terrain attributes such as slope or aspect in the context of species distribution modelling. However, DEM-derived variables are scale-dependent and, given the increasing availability of very high-resolution (VHR) DEMs, their ecological relevance must be assessed for different spatial resolutions. 2. In a study area located in the Swiss Western Alps, we computed VHR DEM-derived variables related to morphometry, hydrology and solar radiation. Based on an original spatial resolution of 0.5 m, we generated DEM-derived variables at 1, 2 and 4 m spatial resolutions, applying a Gaussian pyramid. Their associations with local climatic factors, measured by sensors (direct and ambient air temperature, air humidity and soil moisture), as well as ecological indicators derived from species composition, were assessed with multivariate generalized linear models (GLM) and mixed models (GLMM). 3. Specific VHR DEM-derived variables showed significant associations with climatic factors. In addition to slope, aspect and curvature, the underused wetness and ruggedness indices modelled measured ambient humidity and soil moisture, respectively. Remarkably, the spatial resolution of VHR DEM-derived variables had a significant influence on the models' strength, with coefficients of determination decreasing with coarser resolutions or showing a local optimum with a 2 m resolution, depending on the variable considered. 4. These results support the relevance of using multi-scale DEM variables to provide surrogates for important climatic variables such as humidity, moisture and temperature, offering suitable alternatives to direct measurements for evolutionary ecology studies at a local scale.
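The multi-resolution workflow described above (smooth, subsample, then recompute terrain derivatives) can be sketched with standard tools; a minimal version assuming a square DEM array and scipy's Gaussian filter (the study's exact pyramid parameters are not given in the abstract):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def pyramid_level(dem: np.ndarray, cell: float, sigma: float = 1.0):
    """One Gaussian-pyramid step: low-pass filter, then drop every other cell.
    Returns the coarsened DEM and its new cell size (0.5 m -> 1 m -> 2 m ...)."""
    smoothed = gaussian_filter(dem, sigma=sigma)
    return smoothed[::2, ::2], cell * 2

def slope_deg(dem: np.ndarray, cell: float) -> np.ndarray:
    """Slope in degrees from finite-difference gradients."""
    dz_dy, dz_dx = np.gradient(dem, cell)
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

rng = np.random.default_rng(4)
dem, cell = gaussian_filter(rng.normal(size=(512, 512)), 8) * 50, 0.5  # toy terrain
for _ in range(3):                       # 0.5 m -> 1 m -> 2 m -> 4 m
    dem, cell = pyramid_level(dem, cell)
    print(f"{cell:g} m resolution: mean slope = {slope_deg(dem, cell).mean():.1f} deg")
```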
Abstract:
In the theoretical part, the different polymerisation catalysts are introduced and the phenomena related to mixing in a stirred tank reactor are presented. The advantages and challenges related to scale-up are also discussed. The aim of the applied part was to design and implement an intermediate-sized reactor useful for scale-up studies. The reactor setting was tested by making one batch of Ziegler–Natta polypropylene catalyst. The catalyst preparation with the designed equipment setting succeeded, and the catalyst was analysed so that its properties could be compared with the normal properties of Ziegler–Natta polypropylene catalyst. The total titanium content of the catalyst was slightly higher than in normal Ziegler–Natta polypropylene catalyst, but the magnesium and aluminium contents were at the normal level. By adjusting the siphonation tube and adding one washing step, the titanium content of the catalyst could be decreased. The particle size of the catalyst was small, but the activity was in the normal range. The size of the catalyst particles could be increased by decreasing the stirring speed. During the test run, it was noticed that some improvements to the designed equipment setting could be made. For example, more valves need to be added to the chemical feed line to ensure inert conditions during catalyst preparation, and the nitrogen supply for the reactor needs to be separated from the other nitrogen lines so that the reactor pressure can be kept as desired during catalyst preparation. The proposals for improvements are presented in the applied part. Once these improvements are made, the equipment setting is ready for start-up. The computational fluid dynamics model for the designed reactor was provided through cooperation with Lappeenranta University of Technology. The experiments showed that for adequate mixing with one impeller, a stirring speed of 600 rpm is needed. The computational fluid dynamics model with two impellers showed no difference in mixing efficiency whether the upper impeller was pumping downwards or upwards.
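As context for the mixing discussion, two quantities routinely checked when scaling a stirred tank are the impeller Reynolds number and the tip speed; a minimal sketch with illustrative values (the reactor's actual dimensions and fluid properties are not given in the abstract):

```python
import math

def impeller_reynolds(n_rps: float, d_m: float, rho: float, mu: float) -> float:
    """Impeller Reynolds number Re = rho * N * D^2 / mu
    (N in rev/s, D in m, rho in kg/m^3, mu in Pa.s)."""
    return rho * n_rps * d_m**2 / mu

def tip_speed(n_rps: float, d_m: float) -> float:
    """Impeller tip speed v = pi * D * N, often held constant on scale-up."""
    return math.pi * d_m * n_rps

# Illustrative values: 600 rpm, 0.1 m impeller, water-like medium.
N = 600 / 60.0
print(f"Re = {impeller_reynolds(N, 0.1, 1000.0, 1e-3):.0f}")   # ~1e5 -> turbulent
print(f"tip speed = {tip_speed(N, 0.1):.2f} m/s")
```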
Abstract:
In the present work, liquid-solid flow at industrial scale is modeled using the commercial Computational Fluid Dynamics (CFD) software ANSYS Fluent 14.5. In the literature, there are few studies on liquid-solid flow at industrial scale, and no information can be found about the particular case with modified geometry. The aim of this thesis is to describe the strengths and weaknesses of the multiphase models when a large-scale liquid-solid flow application is studied, including the boundary-layer characteristics. The results indicate that the selection of the most appropriate multiphase model depends on the flow regime. Thus, careful estimation of the flow regime is recommended before modeling; a computational tool was developed for this purpose during this thesis. The homogeneous multiphase model is valid only for homogeneous suspension, the discrete phase model (DPM) is recommended for homogeneous and heterogeneous suspension where the pipe Froude number is greater than 1.0, while the mixture and Eulerian models are also able to predict flow regimes where the pipe Froude number is smaller than 1.0 and particles tend to settle. With increasing material density ratio and decreasing pipe Froude number, the Eulerian model gives the most accurate results, because it does not include simplifications of the Navier-Stokes equations like the other models. In addition, the results indicate that the potential location of erosion in the pipe depends on the material density ratio. Possible sedimentation of particles can cause erosion and increase the pressure drop as well. In the pipe bend, secondary flows in particular, perpendicular to the main flow, affect the location of erosion.
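The regime criterion mentioned above can be made concrete: the densimetric pipe Froude number compares the mean slurry velocity with the settling tendency of the solids. A sketch of one common definition (the thesis may use a slightly different form, and the model-selection rule below simply restates the abstract's conclusions):

```python
import math

def pipe_froude(v: float, d_pipe: float, rho_s: float, rho_l: float,
                g: float = 9.81) -> float:
    """Densimetric pipe Froude number Fr = v / sqrt(g * D * (rho_s/rho_l - 1)).
    Fr > 1 suggests suspended flow; Fr < 1 suggests particles tend to settle."""
    return v / math.sqrt(g * d_pipe * (rho_s / rho_l - 1.0))

def suggest_model(fr: float) -> str:
    """Model-choice heuristic following the abstract's conclusions."""
    return "DPM / homogeneous" if fr > 1.0 else "mixture or Eulerian"

fr = pipe_froude(v=2.0, d_pipe=0.3, rho_s=2650.0, rho_l=1000.0)  # sand in water
print(f"Fr = {fr:.2f} -> {suggest_model(fr)}")
```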
Successful scale-up of human embryonic stem cell production in a stirred microcarrier culture system
Abstract:
Future clinical applications of human embryonic stem (hES) cells will require high-yield culture protocols. Currently, hES cells are mainly cultured in static tissue plates, which offer a limited surface and require repeated sub-culturing. Here we describe a stirred system with commercial dextran-based microcarriers coated with denatured collagen to scale up hES cell production. Maintenance of pluripotency in the microcarrier-based stirred system was shown by immunocytochemical and flow cytometry analyses for pluripotency-associated markers. The formation of cavitated embryoid bodies expressing markers of endoderm, ectoderm and mesoderm was further evidence of maintenance of differentiation capability. Cell yield per volume of medium spent was more than 2-fold higher than in static plates, resulting in a significant decrease in cultivation costs. A total of 10⁸ karyotypically stable hES cells were obtained from a unitary small vessel that needed virtually no manipulation during cell proliferation, decreasing risks of contamination. Spinner flasks are available with working volumes up to the range of several liters. If desired, samples from the homogeneous suspension can be withdrawn to allow the process validation needed in the last expansion steps prior to transplantation. Especially for clinical trials involving dozens to hundreds of patients, the use of a small number of larger spinners instead of hundreds of plates or flasks will be beneficial. To our knowledge, this is the first description of successful scale-up of feeder- and Matrigel™-free production of undifferentiated hES cells under continuous agitation, which makes this system a promising alternative for both therapy and research needs.
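The scale-up arithmetic implied above is simple to make explicit; a sketch with illustrative numbers (only the 2-fold yield-per-volume factor and the 10⁸-cell figure come from the abstract, the per-plate parameters are assumptions):

```python
# Illustrative scale-up arithmetic; plate and demand figures are assumptions.
cells_needed = 1e9            # e.g., a hypothetical multi-patient cell bank
cells_per_plate = 5e6         # assumed yield of one static tissue plate
cells_per_spinner = 1e8       # reported yield of one small stirred vessel
yield_factor = 2.0            # abstract: >2-fold higher yield per volume of medium

plates = cells_needed / cells_per_plate
spinners = cells_needed / cells_per_spinner
print(f"{plates:.0f} static plates vs {spinners:.0f} small spinners")
print(f"medium cost per cell roughly {yield_factor:.0f}x lower in suspension")
```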
Abstract:
Magnetic clouds are a class of interplanetary coronal mass ejections (CMEs) predominantly characterised by a smooth rotation in the magnetic field direction, indicative of a magnetic flux rope structure. Many magnetic clouds, however, also contain sharp discontinuities within the smoothly varying magnetic field, suggestive of narrow current sheets. In this study we present observations and modelling of magnetic clouds with strong current sheet signatures close to the centre of the apparent flux rope structure. Using an analytical magnetic flux rope model, we demonstrate how such current sheets can form as a result of a cloud's kinematic propagation from the Sun to the Earth, without any external forces or influences. This model is shown to match observations of four particular magnetic clouds remarkably well. The model predicts that current sheet intensity increases for increasing CME angular extent and decreasing CME radial expansion speed. Assuming such current sheets facilitate magnetic reconnection, the process of current sheet formation could ultimately lead to a single flux rope becoming fragmented into multiple flux ropes. This change in topology has consequences for magnetic clouds as barriers to energetic particle propagation.
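The kinematic effect described above can be illustrated with a toy calculation: each point of an initially circular flux-rope cross-section is advected radially outward at constant speed while the structure's angular extent stays fixed, which flattens the circle and pushes oppositely directed field toward the centre. A sketch under these simplifying assumptions (not the paper's actual analytical model):

```python
import numpy as np

def advect_radially(x: np.ndarray, y: np.ndarray, v: float, t: float):
    """Move each point of a cross-section radially away from the Sun (origin)
    at constant speed v for time t; angular position is preserved."""
    r = np.hypot(x, y)
    phi = np.arctan2(y, x)
    r_new = r + v * t
    return r_new * np.cos(phi), r_new * np.sin(phi)

# Initially circular cross-section of radius 0.05 AU centred at 0.2 AU.
theta = np.linspace(0, 2 * np.pi, 200)
x0, y0 = 0.2 + 0.05 * np.cos(theta), 0.05 * np.sin(theta)

x1, y1 = advect_radially(x0, y0, v=1.0, t=0.8)   # units where v * t ~ 0.8 AU
width = x1.max() - x1.min()                      # radial extent
height = y1.max() - y1.min()                     # transverse extent
print(f"aspect ratio height/width: {height / width:.1f}")  # >> 1: 'pancaked'
```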
Abstract:
We study cartel stability in a differentiated price-setting duopoly with returns to scale. We show that a cartel may be equally stable in the presence of lower differentiation, provided that the decreasing-returns parameter is high. In addition, we demonstrate that for a given discount factor, there are decreasing-returns-to-scale technologies for which the cartel is always stable, independent of the degree of differentiation.
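The stability notion at work here is the standard repeated-game one: collusion is sustainable under grim-trigger strategies when the discount factor exceeds a critical threshold determined by the deviation, collusive, and punishment profits (a textbook condition, not the paper's specific expression):

```latex
% Grim-trigger sustainability condition for a cartel.
% pi_C: per-period collusive profit, pi_D: one-shot deviation profit,
% pi_N: per-period (Nash) punishment profit, delta: discount factor.
\[
\frac{\pi_C}{1-\delta} \;\ge\; \pi_D + \frac{\delta\,\pi_N}{1-\delta}
\quad\Longleftrightarrow\quad
\delta \;\ge\; \delta^{*} \;=\; \frac{\pi_D - \pi_C}{\pi_D - \pi_N}.
\]
% Decreasing returns shrink the gain from deviating (pi_D - pi_C), which
% pushes delta* down and can make the cartel stable for any degree of
% product differentiation.
```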
Abstract:
A truly variance-minimizing filter is introduced and its performance is demonstrated with the Korteweg–de Vries (KdV) equation and with a multilayer quasigeostrophic model of the ocean area around South Africa. It is recalled that Kalman-like filters are not variance minimizing for nonlinear model dynamics and that four-dimensional variational data assimilation (4DVAR)-like methods relying on perfect model dynamics have difficulty with providing error estimates. The new method does not have these drawbacks. In fact, it combines advantages from both methods in that it does provide error estimates while automatically having balanced states after analysis, without extra computations. It is based on ensemble or Monte Carlo integrations to simulate the probability density of the model evolution. When observations are available, the so-called importance resampling algorithm is applied. From Bayes's theorem it follows that each ensemble member receives a new weight dependent on its 'distance' to the observations. Because the weights are strongly varying, a resampling of the ensemble is necessary. This resampling is done such that members with high weights are duplicated according to their weights, while low-weight members are largely ignored. In passing, it is noted that data assimilation is not an inverse problem by nature, although it can be formulated that way. Also, it is shown that the posterior variance can be larger than the prior if the usual Gaussian framework is set aside. However, in the examples presented here, the entropy of the probability densities is decreasing. The application to the ocean area around South Africa, governed by strongly nonlinear dynamics, shows that the method is working satisfactorily. The strong and weak points of the method are discussed and possible improvements are proposed.
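The weight-and-resample step described above is the core of the particle filter; a minimal sketch assuming Gaussian observation errors and a scalar state (a hypothetical toy ensemble, not the paper's ocean model):

```python
import numpy as np

def importance_resample(ensemble: np.ndarray, obs: float, obs_std: float,
                        rng: np.random.Generator) -> np.ndarray:
    """One analysis step: weight members by their Gaussian likelihood of the
    observation, then resample so high-weight members are duplicated."""
    # Bayes: weight proportional to the likelihood of obs given each member.
    logw = -0.5 * ((ensemble - obs) / obs_std) ** 2
    w = np.exp(logw - logw.max())      # stabilised before normalisation
    w /= w.sum()
    idx = rng.choice(ensemble.size, size=ensemble.size, p=w)  # multinomial resampling
    return ensemble[idx]

rng = np.random.default_rng(5)
prior = rng.normal(0.0, 2.0, size=1000)          # forecast ensemble
posterior = importance_resample(prior, obs=1.5, obs_std=0.5, rng=rng)
print(f"prior mean {prior.mean():.2f} -> posterior mean {posterior.mean():.2f}")
```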