765 results for sample complexity
Abstract:
OBJECTIVES: 1 - To verify the prevalence of depressive symptoms in first- to fourth-year medical students using the Beck Depression Inventory (BDI). 2 - To establish correlations between target factors and higher or lower BDI scores. 3 - To investigate the relationship between the prevalence of depressive symptoms and the demand for psychological care offered by the Centro Universitário Lusíada. METHOD: Cross-sectional study of 290 first- to fourth-year medical students; administration of the BDI, a socio-demographic survey, and an evaluation of satisfaction with progress. RESULTS: The study sample was 59% female and 41% male. The mean BDI score was 6.3 (SD 5.8). The overall prevalence of depressive symptoms was 23.1%. The following associations were statistically significant (p < 0.05): among students for whom the course failed to meet original expectations, who were dissatisfied with the course, or who came from the interior of the state (20.5%, 12.5%, and 24.4% of the total sample, respectively), the BDI was consistent with some degree of depression in 40%, 36.1%, and 36.4% of cases, respectively. CONCLUSION: The study showed a higher prevalence of depressive symptoms in medical students than in the general population.
Abstract:
A company's competence in managing its product portfolio complexity is becoming critically important in a rapidly changing business environment. The continuous evolution of customer needs, the competitive market environment, and internal product development lead to increasing complexity in product portfolios. Companies that manage complexity in product development are more profitable in the long run. The complexity derives from product development and management processes in which the development of new product variants is not managed efficiently. Complexity is managed through modularization, a method that divides the product structure into modules. In modularization, it is essential to take into account the trade-off between perceived customer value and module or component commonality across products. Another goal is to make product configuration more flexible. The benefits are achieved by optimizing the complexity of the module offering and deriving new product variants more flexibly and accurately. The developed modularization process includes steps for preparation, mapping the current situation, creating a modular strategy, and implementing that strategy. The organization and support systems must also be adapted to follow up on targets and to execute modularization in practice.
Abstract:
This book is dedicated to celebrating the 60th birthday of Professor Rainer Huopalahti. Professor Rainer "Repe" Huopalahti has had, and in fact is still enjoying, a distinguished career in the analysis of food and food-related flavor compounds. One will find it hard to make any progress in this particular field without a valid and innovative sample handling technique, and this is a field in which Professor Huopalahti has made great contributions. The title and the front cover of this book honor Professor Huopalahti's early steps in science. His PhD thesis, published in 1985, is entitled "Composition and content of aroma compounds in the dill herb, Anethum graveolens L., affected by different factors". At the time, the thesis introduced new technology applied to the sample handling and analysis of the flavoring compounds of dill. Sample handling is an essential task in just about every analysis. If one is working with minor compounds in a sample or trying to detect trace levels of analytes, one of the aims of sample handling may be to increase the sensitivity of the analytical method. On the other hand, if one is working with a challenging matrix, such as the kind found in biological samples, one of the aims is to increase the selectivity. Quite often, however, the aim is to increase both selectivity and sensitivity. This book provides good and representative examples of the necessity of valid sample handling and of the role of sample handling in the analytical method. The contributors to the book are leading Finnish scientists in the field of organic instrumental analytical chemistry. Some of them are also Repe's personal friends and former students from the University of Turku, Department of Biochemistry and Food Chemistry. Importantly, the authors all know Repe in one way or another and are well aware of his achievements in the field of analytical chemistry.
The editorial team had a great time during the planning phase and during the "hard work" editorial phase of the book. For example, we came up with many ideas on how to publish it. After many long discussions, we decided to have a limited edition as an "old school" hardcover book, and to acknowledge more modern ways of disseminating knowledge by publishing an internet version of the book on the webpages of the University of Turku. Downloading the book from the webpage for personal use is free of charge. We believe and hope that the book will be read with great interest by scientists working in the fascinating field of organic instrumental analytical chemistry. We decided to publish the book in English for two main reasons. First, we believe that in the near future more and more teaching in Finnish universities will be delivered in English; publishing in English both facilitates this process and encourages students to develop good language skills. Secondly, we believe that the book will also interest scientists outside Finland, particularly in the other member states of the European Union. The editorial team thanks all the authors for their willingness to contribute to this book and to adhere to the very strict schedule. We also want to thank the various individuals and enterprises who financially supported the book project. Without that support, it would not have been possible to publish the hardcover book.
Disturbing Whiteness: The Complexity of White Female Identity in Selected Works by Joyce Carol Oates
Abstract:
Mechanical harvesting is an important stage in the soybean production process, and the loss of a significant number of grains is common in this process. Despite the existence of mechanisms to monitor these losses, sampling methods are still essential to quantify them. Assuming that the size of the sample area affects the reliability of, and the variability between, samples when quantifying losses, this paper aimed to analyze the variability and feasibility of using different sample area sizes (1, 2 and 3 m²) in quantifying losses in the mechanical harvesting of soybeans. Thirty-six sites were sampled, and cutting losses, losses from other combine mechanisms, and total losses were evaluated, as well as the water content of the seeds, straw distribution, and crop productivity. Data were subjected to statistical analysis (descriptive statistics and analysis of variance) and Statistical Process Control (SPC). The coefficients of variation were similar for the three frame sizes evaluated. Combine losses showed stable behavior, whereas cutting losses and total losses showed unstable behavior. Frame size did not affect the quantification or the variability of losses in the mechanical harvesting of soybeans; thus, a 1 m² frame can be used for determining losses.
Abstract:
ABSTRACT This study aimed to compare thematic maps of soybean yield for different sampling grids, using geostatistical methods (the semivariance function and kriging). The analysis was performed with soybean yield data (t ha⁻¹) from a commercial area, using regular grids with distances between points of 25x25 m, 50x50 m, 75x75 m, and 100x100 m (549, 188, 66 and 44 sampling points, respectively), together with data obtained from yield monitors. Optimized sampling schemes were also generated with the Simulated Annealing algorithm, using maximization of the overall accuracy measure as the optimization criterion. The results showed that sample size and sample density influenced the description of the spatial distribution of soybean yield. When the sample size was increased, the thematic maps described the spatial variability of soybean yield more efficiently (higher accuracy indices and lower sums of squared estimation error). In addition, more accurate maps were obtained, especially with the optimized sample configurations of 188 and 549 sample points.
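The grid comparison above rests on the empirical semivariance function. As a rough illustration (not the authors' code), the classical Matheron estimator for a single lag bin can be sketched as follows; the coordinates, values, lag, and tolerance are all hypothetical:

```python
import numpy as np

def empirical_semivariance(coords, values, lag, tol):
    """Classical (Matheron) estimator for one lag bin:
    gamma(h) = 0.5 * mean of squared value differences over all
    point pairs whose separation distance falls within lag +/- tol."""
    diffs = []
    n = len(values)
    for i in range(n):
        for j in range(i + 1, n):
            h = np.linalg.norm(coords[i] - coords[j])  # pair separation
            if abs(h - lag) <= tol:
                diffs.append((values[i] - values[j]) ** 2)
    return 0.5 * np.mean(diffs) if diffs else np.nan

# Three points 25 m apart on a transect, e.g. a 25x25 m grid row
coords = np.array([[0.0, 0.0], [25.0, 0.0], [50.0, 0.0]])
values = np.array([2.0, 3.0, 5.0])  # hypothetical yields, t/ha
gamma_25 = empirical_semivariance(coords, values, lag=25.0, tol=1.0)
```

Repeating this over several lags yields the empirical semivariogram, to which a model is fitted before kriging interpolates yield onto the map grid.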
Abstract:
PURPOSE: To establish reference values for the first-trimester uterine artery resistance index (UtA-RI) and pulsatility index (UtA-PI) in healthy singleton pregnant women from Northeast Brazil. METHODS: A prospective observational cohort study including 409 consecutive singleton pregnancies undergoing routine early ultrasound screening at 11-14 weeks of gestation was performed. The patients answered a questionnaire to assess maternal epidemiological characteristics. The left and right UtA-PI and UtA-RI were examined by color and pulsed Doppler using the transabdominal technique, and the mean UtA-PI, mean UtA-RI, and the presence of bilateral protodiastolic notching were recorded. Quantile regression was used to estimate reference values. RESULTS: The mean ± standard deviation UtA-RI and UtA-PI were 0.7±0.1 and 1.5±0.5, respectively. When segregated by gestational age, the mean UtA-PI was 1.6±0.5 at 11 weeks, 1.5±0.6 at 12 weeks, 1.4±0.4 at 13 weeks, and 1.3±0.4 at 14 weeks' gestation, and the mean UtA-RI was 0.7±0.1 at 11 weeks, 0.7±0.1 at 12 weeks, 0.6±0.1 at 13 weeks, and 0.6±0.1 at 14 weeks' gestation. A bilateral uterine artery notch was present in 261 (63.8%) patients. The 5th and 95th percentiles were 0.7 and 2.3 for the UtA-PI and 0.5 and 0.8 for the UtA-RI, respectively. CONCLUSION: A normal reference range for uterine artery Doppler in healthy singleton pregnancies from Northeast Brazil was established. The 95th percentile UtA-PI and UtA-RI values may serve as cut-offs in future studies on the prediction of pregnancy complications (e.g., pre-eclampsia) in Northeast Brazil.
Abstract:
This thesis describes an approach to overcoming the complexity of software product management (SPM) and consists of several studies that investigate the activities and roles in product management, as well as issues related to the adoption of SPM. The thesis focuses on organizations that have started adopting SPM but have faced difficulties due to its complexity and fuzziness, and suggests frameworks for overcoming these challenges using the principles of decomposition and iterative improvement. The research process consisted of three phases, each of which provided complementary results and empirical observations on the problem of overcoming the complexity of SPM. Overall, product management processes and practices in 13 companies were studied and analysed. Moreover, additional data were collected through a survey conducted worldwide. The collected data were analysed using grounded theory (GT) to identify possible ways of overcoming the complexity of SPM. Complementary research methods, such as elements of the Theory of Constraints, were used for deeper data analysis. The results of the thesis indicate that decomposing SPM activities according to the specific characteristics of companies and roles is a useful approach for simplifying existing SPM frameworks. Companies would benefit from the results by adopting SPM activities more efficiently and effectively, and by spending fewer resources on adoption through concentrating on the most important SPM activities.
Abstract:
A three-degree-of-freedom model of the dynamic mass at the middle of a test sample, resembling a Stockbridge neutraliser, is introduced. This model is used to identify what is termed here the equivalent complex cross-section flexural stiffness (ECFS) of the beam element that is part of the whole test sample. Once identified, this ECFS gives the effective cross-section flexural stiffness of the beam as well as its effective damping, measured as the loss factor of an equivalent viscoelastic beam. The beam element of the test sample may be of any complexity, such as a segment of stranded cable of the ACSR type. These data are important parameters for the design of overhead power transmission lines and other cable structures. A cost function is defined and used in the identification of the ECFS. An experiment designed to measure the dynamic masses of two test samples is described. Experimental and identified results are presented and discussed.
Abstract:
A complex system is any system that exhibits intricate behavior and is hard to model using the reductionist approach of successive subdivision in search of "elementary" constituents. Nature provides plenty of examples of such systems, in fields as diverse as biology, chemistry, geology, physics, fluid mechanics, and engineering. What happens, in general, is that these systems present a situation in which a large number of both attracting and unstable chaotic sets coexist. As a result, the dynamical behavior can be rich and varied, with many competing behaviors coexisting. In this work, we present and discuss simple mechanical systems that are good paradigms of complex systems when subjected to random external noise. We argue that systems with few degrees of freedom can present the same complex behavior under quite general conditions.
Abstract:
The objectives of this study were to evaluate baby corn yield, green corn yield, and grain yield in corn cultivar BM 3061, with weed control achieved via a combination of hoeing and intercropping with gliricidia, and to determine how sample size influences the accuracy of weed growth evaluation. A randomized block design with ten replicates was used. The cultivar was submitted to the following treatments: A = hoeings at 20 and 40 days after corn sowing (DACS); B = hoeing at 20 DACS + gliricidia sowing after hoeing; C = gliricidia sowing together with corn sowing + hoeing at 40 DACS; D = gliricidia sowing together with corn sowing; and E = no hoeing. Gliricidia was sown at a density of 30 viable seeds m⁻². After harvesting the mature ears, the area of each plot was divided into eight sampling units measuring 1.2 m² each to evaluate weed growth (above-ground dry biomass). Treatment A provided the highest baby corn, green corn, and grain yields. Treatment B did not differ from treatment A in the yields of the three products; it was equivalent to treatment C for green corn yield, but superior to C in baby corn weight and grain yield. Treatments D and E provided similar yields, inferior to those of the other treatments. Treatment B is therefore promising. The relation between the coefficient of experimental variation (CV) and sample size (S) for evaluating the growth of the above-ground part of the weeds was given by the equation CV = 37.57 S^(-0.15), i.e., CV decreased as S increased. The optimal sample size indicated by this equation was 4.3 m².
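The fitted power law above can be evaluated directly. A minimal sketch, where the coefficient and exponent come from the abstract and everything else (function name, sample areas) is illustrative:

```python
def cv_percent(sample_area_m2):
    """Coefficient of experimental variation (%) as a function of
    sample frame area S in m², using the power law CV = 37.57 * S**(-0.15)
    fitted in the study."""
    return 37.57 * sample_area_m2 ** -0.15

# CV shrinks as the sampled area grows, with diminishing returns
for s in (1.0, 2.0, 4.3, 8.0):
    print(f"S = {s:4.1f} m²  ->  CV = {cv_percent(s):5.2f} %")
```

The diminishing returns visible in the output are what motivate an "optimal" frame size: beyond a few square metres, enlarging the frame buys little extra precision.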
Abstract:
The main objective of the present study was to evaluate the diagnostic value (clinical application) of brain measures and cognitive function. Alzheimer and multi-infarct patients (N = 30) and normal subjects over the age of 50 (N = 40) underwent medical, neurological, and cognitive investigation. The cognitive tests applied were the Mini-Mental, word span, digit span, logical memory, spatial recognition span, Boston naming test, praxis, and calculation tests. The brain ratios calculated were the ventricle-brain, bifrontal, bicaudate, third ventricle, and suprasellar cistern measures. These data were obtained from brain computed tomography scans, and the cutoff values from receiver operating characteristic (ROC) curves. We analyzed the diagnostic parameters provided by these ratios and compared them with those obtained by cognitive evaluation. The sensitivity and specificity of the cognitive tests were higher than those of the brain measures, although dementia patients presented higher ratios and poorer cognitive performance than normal individuals. Normal controls over the age of 70 presented higher measures than younger groups, but similar cognitive performance. We found diffuse losses of central nervous system tissue, related to the distribution of cerebrospinal fluid, in dementia patients. The likelihood of case identification by functional impairment was higher than when changes in the structure of the central nervous system were used. Cognitive evaluation still seems to be the best method to screen individuals in the community, especially in developing countries, where the cost of brain imaging precludes its use for the screening and initial assessment of dementia.
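The diagnostic comparison above reduces, at any chosen ROC cutoff, to sensitivity and specificity. A minimal sketch with hypothetical scores and a hypothetical cutoff, assuming (as with the Mini-Mental) that lower scores indicate impairment:

```python
def sensitivity_specificity(scores, labels, cutoff):
    """Sensitivity and specificity of a screening cutoff.

    labels: 1 = dementia patient, 0 = normal control.
    Convention assumed here: a score at or below the cutoff is
    classified as positive (impaired), as for Mini-Mental-style tests.
    """
    tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s <= cutoff)
    fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s > cutoff)
    tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s > cutoff)
    fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s <= cutoff)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical test scores for two patients and two controls
sens, spec = sensitivity_specificity(
    scores=[20, 22, 28, 29], labels=[1, 1, 0, 0], cutoff=24)
```

Sweeping the cutoff over the observed score range and plotting sensitivity against (1 - specificity) traces the ROC curve from which the study's cutoffs were read.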
Abstract:
The reasons for the inconsistent association between salt consumption and blood pressure levels observed in within-society surveys are not known. A total of 157 normotensive subjects aged 18 to 35 years, selected at random in a cross-sectional population-based survey, answered a structured questionnaire. They were classified as strongly predisposed to hypertension when two or more first-degree relatives had a diagnosis of hypertension. Anthropometric parameters were obtained and sitting blood pressure was determined with aneroid sphygmomanometers. Sodium and potassium excretion was measured by flame spectrophotometry in an overnight urine sample. A positive correlation between blood pressure and urinary sodium excretion was detected only in the group of individuals strongly predisposed to hypertension, both for systolic blood pressure (r = 0.51, P<0.01) and diastolic blood pressure (r = 0.50, P<0.01). In a covariance analysis, after controlling for age, skin color and body mass index, individuals strongly predisposed to hypertension who excreted amounts of sodium above the median of the entire sample had higher systolic and diastolic blood pressure than subjects classified into the remaining conditions. The influence of familial predisposition to hypertension on the association between salt intake and blood pressure may be an additional explanation for the weak association between urinary sodium excretion and blood pressure observed in within-population studies, since it can influence the association between salt consumption and blood pressure in some but not all inhabitants.