976 results for "Third-order correlation"
Abstract:
This study provides the first spatially detailed and complete inventory of Ambrosia pollen sources in Italy, the third-largest centre of ragweed in Europe. The inventory relies on a well-tested top-down approach that combines local knowledge, detailed land cover, pollen observations and a digital elevation model, assuming that permanent ragweed populations mainly grow below 745 m. The pollen data were obtained from 92 volumetric pollen traps located throughout Italy during 2004-2013. Land cover is derived from Corine Land Cover information with 100 m resolution. The digital elevation model is based on the NASA shuttle radar mission with 90 m resolution. The inventory was produced using a combination of ArcGIS and Python for automation, validated using cross-correlation, and has a final resolution of 5 km x 5 km. The method includes a harmonization of the inventory with other European inventories for the Pannonian Plain, France and Austria in order to provide a coherent picture of all major ragweed sources. The results show that the mean annual pollen index varies from 0 in South Italy to 6779 in the Po Valley. They also show that very large pollen indices are observed in the Milan region, although this region has smaller amounts of ragweed habitat than other parts of the Po Valley and known ragweed areas in France and the Pannonian Plain. A significant decrease in Ambrosia pollen concentrations was recorded in 2013 by pollen monitoring stations located in the Po Valley, particularly northwest of Milan, the same year the Ophraella communa leaf beetle appeared in Northern Italy. These results suggest that ragweed habitats near the Milan region have very high densities of Ambrosia plants compared to other known ragweed habitats in Europe.
The Milan region therefore appears to contain habitats with the largest ragweed infestation in Europe, but the smaller extent of habitats likely causes the pollen index to be lower than in central parts of the Pannonian Plain. A small number of densely packed habitats may have increased the impact of the Ophraella beetle and might account for the documented decrease in airborne Ambrosia pollen levels, an event that cannot be explained by meteorology alone. Further investigations that model atmospheric pollen before and after the appearance of the beetle in this part of Northern Italy are needed to assess the influence of the beetle on airborne Ambrosia pollen concentrations. Future work will focus on short-distance transport episodes for stations located in the Po Valley, and long-distance transport events for stations in Central Italy that exhibit peaks in daily airborne Ambrosia pollen levels.
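The elevation-threshold-plus-land-cover logic described in this abstract can be sketched roughly as follows. This is a hedged illustration, not the study's actual processing chain (which used ArcGIS and Python): the land-cover class codes, array sizes and aggregation block are invented.

```python
import numpy as np

SUITABLE_CLASSES = {12, 18, 21}   # assumed codes, e.g. arable land, road margins
ELEVATION_LIMIT_M = 745.0         # ragweed assumed absent above this altitude

def habitat_fraction(land_cover, elevation, block=50):
    """Fraction of suitable, low-elevation cells per coarse grid block.

    land_cover, elevation : 2-D arrays of equal shape (fine resolution).
    block : fine cells per coarse-cell edge (e.g. 100 m cells -> 5 km grid).
    """
    suitable = np.isin(land_cover, list(SUITABLE_CLASSES))
    habitat = suitable & (elevation < ELEVATION_LIMIT_M)
    h, w = habitat.shape
    # trim to a multiple of the block size, then average within blocks
    h, w = h - h % block, w - w % block
    coarse = habitat[:h, :w].reshape(h // block, block, w // block, block)
    return coarse.mean(axis=(1, 3))

rng = np.random.default_rng(0)
lc = rng.integers(0, 30, size=(200, 200))        # synthetic land-cover codes
dem = rng.uniform(0, 1500, size=(200, 200))      # synthetic elevations (m)
frac = habitat_fraction(lc, dem, block=50)
print(frac.shape)  # a 4 x 4 coarse source grid of habitat fractions
```

The real inventory would also fold in the local knowledge and pollen-trap validation steps mentioned above; this sketch only shows the masking-and-aggregation core.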
Abstract:
The article analyses the (third) Coleman Report on private and public schools. The report scrutinises the relationship between private and public schools and finds that private-school students show better academic achievement. Coleman concluded that these findings provided a strong argument in favour of public financial support for private schools. However, he also identified a number of school characteristics that he believed to be related to student achievement. According to his analysis, these characteristics were not limited to private schools; public schools exhibiting the same characteristics also had good results. Coleman interpreted the available data in favour of financial aid to private schools, although this was not the only possible interpretation. An alternative conclusion would have been to encourage these characteristics in public schools. Why did Coleman disregard this possibility? Why did he deviate from his usual scientific rigour? The present article suggests two reasons for the narrow interpretation of the relationship between public and private schools in Coleman's third report. The first lies in Coleman's notion of contemporary society as a constructed system in which every individual actor holds a place in the structure and requires incentives in order to act to the benefit of society. In the case of education, the goal of the institution is to ensure the high cognitive achievement of students, and the incentive is related to choice and competition. The second reason is related to Coleman's vision of sociology as a discipline aiding the construction of an effective society. (DIPF/Orig.)
Abstract:
In the present study, Korean-English bilingual (KEB) and Korean monolingual (KM) children, between the ages of 8 and 13 years, and KEB adults, ages 18 and older, were examined with one speech perception task, called the Nonsense Syllable Confusion Matrix (NSCM) task (Allen, 2005), and two production tasks, called the Nonsense Syllable Imitation Task (NSIT) and the Nonword Repetition Task (NRT; Dollaghan & Campbell, 1998). The present study examined (a) which English sounds on the NSCM task were identified less well, presumably due to interference from Korean phonology, in bilinguals learning English as a second language (L2) and in monolinguals learning English as a foreign language (FL); (b) which English phonemes on the NSIT were more challenging for bilinguals and monolinguals to produce; (c) whether perception on the NSCM task is related to production on the NSIT, or phonological awareness, as measured by the NRT; and (d) whether perception and production differ in three age-language status groups (i.e., KEB children, KEB adults, and KM children) and in three proficiency subgroups of KEB children (i.e., English-dominant, ED; balanced, BAL; and Korean-dominant, KD). In order to determine English proficiency in each group, language samples were extensively and rigorously analyzed, using software, called Systematic Analysis of Language Transcripts (SALT). Length of samples in complete and intelligible utterances, number of different and total words (NDW and NTW, respectively), speech rate in words per minute (WPM), and number of grammatical errors, mazes, and abandoned utterances were measured and compared among the three initial groups and the three proficiency subgroups. Results of the language sample analysis (LSA) showed significant group differences only between the KEBs and the KM children, but not between the KEB children and adults. 
Nonetheless, compared to normative means (from a sample length- and age-matched database provided by SALT), the KEB adult group and the KD subgroup produced English at significantly slower speech rates than expected for monolingual, English-speaking counterparts. Two existing models of bilingual speech perception and production—the Speech Learning Model or SLM (Flege, 1987, 1992) and the Perceptual Assimilation Model or PAM (Best, McRoberts, & Sithole, 1988; Best, McRoberts, & Goodell, 2001)—were considered to see if they could account for the perceptual and production patterns evident in the present study. The selected English sounds for stimuli in the NSCM task and the NSIT were 10 consonants, /p, b, k, g, f, θ, s, z, ʧ, ʤ/, and 3 vowels /I, ɛ, æ/, which were used to create 30 nonsense syllables in a consonant-vowel structure. Based on phonetic or phonemic differences between the two languages, English sounds were categorized either as familiar sounds—namely, English sounds that are similar, but not identical, to L1 Korean, including /p, k, s, ʧ, ɛ/—or unfamiliar sounds—namely, English sounds that are new to L1, including /b, g, f, θ, z, ʤ, I, æ/. The results of the NSCM task showed that (a) consonants were perceived correctly more often than vowels, (b) familiar sounds were perceived correctly more often than unfamiliar ones, and (c) familiar consonants were perceived correctly more often than unfamiliar ones across the three age-language status groups and across the three proficiency subgroups; and (d) the KEB children perceived correctly more often than the KEB adults, the KEB children and adults perceived correctly more often than the KM children, and the ED and BAL subgroups perceived correctly more often than the KD subgroup. The results of the NSIT showed (a) consonants were produced more accurately than vowels, and (b) familiar sounds were produced more accurately than unfamiliar ones, across the three age-language status groups. 
Also, (c) familiar consonants were produced more accurately than unfamiliar ones in the KEB and KM child groups, and (d) unfamiliar vowels were produced more accurately than a familiar one in the KEB child group, but the reverse was true in the KEB adult and KM child groups. The KEB children produced sounds correctly significantly more often than the KM children and the KEB adults, though the percent correct differences were smaller than for perception. Production differences were not found among the three proficiency subgroups. Perception on the NSCM task was compared to production on the NSIT and NRT. Weak positive correlations were found between perception and production (NSIT) for unfamiliar consonants and sounds, whereas a weak negative correlation was found for unfamiliar vowels. Several correlations were significant for perceptual performance on the NSCM task and overall production performance on the NRT: for unfamiliar consonants, unfamiliar vowels, unfamiliar sounds, consonants, vowels, and overall performance on the NSCM task. Nonetheless, no significant correlation was found between production on the NSIT and NRT. Evidently these are two very different production tasks, where immediate imitation of single syllables on the NSIT results in high performance for all groups. Findings of the present study suggest that (a) perception and production of L2 consonants differ from those of vowels; (b) perception and production of L2 sounds involve an interaction of sound type and familiarity; (c) a weak relation exists between perception and production performance for unfamiliar sounds; and (d) L2 experience generally predicts perceptual and production performance. The present study yields several conclusions. The first is that familiarity of sounds is an important influence on L2 learning, as claimed by both SLM and PAM. 
In the present study, familiar sounds were perceived and produced correctly more often than unfamiliar ones in most cases, in keeping with PAM, though experienced L2 learners (i.e., the KEB children) produced unfamiliar vowels better than familiar ones, in keeping with SLM. Nonetheless, the second conclusion is that neither SLM nor PAM consistently and thoroughly explains the results of the present study. This is because both theories assume that the influence of L1 on the perception of L2 consonants and vowels works in the same way as for production of them. The third and fourth conclusions are two proposed arguments: that perception and production of consonants are different than for vowels, and that sound type interacts with familiarity and L2 experience. These two arguments can best explain the current findings. These findings may help us to develop educational curricula for bilingual individuals listening to and articulating English. Further, the extensive analysis of spontaneous speech in the present study should contribute to the specification of parameters for normal language development and function in Korean-English bilingual children and adults.
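The scoring behind results like those of the NSCM task can be illustrated with a small sketch: per-sound percent correct computed from a stimulus-by-response confusion matrix, averaged over familiar versus unfamiliar sets. The counts and the ASCII sound labels below are invented, not the study's data.

```python
import numpy as np

# ASCII stand-ins for the study's phoneme stimuli; first five are "familiar"
sounds = ["p", "k", "s", "ch", "eh", "b", "g", "f", "th", "z", "j", "ih", "ae"]
familiar = {"p", "k", "s", "ch", "eh"}

rng = np.random.default_rng(0)
# rows = presented sound, columns = perceived response (counts); the +20 on
# the diagonal makes responses mostly correct, as in real perception data
confusions = rng.integers(0, 5, size=(len(sounds), len(sounds)))
np.fill_diagonal(confusions, confusions.diagonal() + 20)

pct = confusions.diagonal() / confusions.sum(axis=1)  # per-sound percent correct

for label, subset in [("familiar", familiar), ("unfamiliar", set(sounds) - familiar)]:
    idx = [i for i, s in enumerate(sounds) if s in subset]
    print(label, round(float(pct[idx].mean()), 3))
```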
Abstract:
Optical full-field measurement methods such as Digital Image Correlation (DIC) provide a new opportunity for measuring deformations and vibrations with high spatial and temporal resolution. However, application to full-scale wind turbines is not trivial: elaborate preparation of the experiment is vital, and sophisticated post-processing of the DIC results is essential. In the present study, a rotor blade of a 3.2 MW wind turbine is equipped with a random black-and-white dot pattern at four different radial positions. Two cameras are located in front of the wind turbine and the response of the rotor blade is monitored using DIC for different turbine operations. In addition, a Light Detection and Ranging (LiDAR) system is used to measure the wind conditions. Wind fields are created based on the LiDAR measurements and used to perform aeroelastic simulations of the wind turbine by means of advanced multibody codes. The results from the optical DIC system appear plausible when checked against common and expected results. In addition, the comparison of relative out-of-plane blade deflections shows good agreement between DIC results and aeroelastic simulations.
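The core matching step behind a DIC measurement can be sketched as template tracking by normalized cross-correlation. The sketch below is a minimal integer-pixel version (production DIC codes add subset shape functions and subpixel interpolation), with a synthetic speckle image standing in for the blade footage.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a**2).sum() * (b**2).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def track_subset(ref, cur, top, left, size, search=5):
    """Best integer displacement of a size x size subset within +/- search px."""
    template = ref[top:top + size, left:left + size]
    best, best_dx = -2.0, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + size > cur.shape[0] or x + size > cur.shape[1]:
                continue
            score = ncc(template, cur[y:y + size, x:x + size])
            if score > best:
                best, best_dx = score, (dy, dx)
    return best_dx, best

rng = np.random.default_rng(1)
frame = rng.random((64, 64))                          # synthetic speckle pattern
shifted = np.roll(frame, shift=(2, 3), axis=(0, 1))   # known shift: 2 px down, 3 right
disp, score = track_subset(frame, shifted, top=20, left=20, size=15, search=5)
print(disp)  # recovers the (2, 3) displacement
```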
Abstract:
Privity of contract has lately been criticized in several European jurisdictions, particularly due to the onerous consequences it gives rise to in arrangements typical of modern exchange, such as chains of contracts. Privity of contract is a classical premise of contract law, which prohibits a third party from acquiring or enforcing rights under a contract to which it is not a party. This premise is usually seen as manifested in the doctrine of privity of contract developed under common law; however, the jurisdictions of continental Europe recognize a corresponding starting point in contract law. One of the traditional industry sectors affected by this premise is the construction industry. A typical large construction project includes a contractual chain comprised of an employer, a main contractor and a subcontractor. The employer is usually dependent on the subcontractor's performance; however, no contractual nexus exists between the two. Accordingly, the employer might want to circumvent privity of contract in order to reach the subcontractor and to mitigate the risks imposed by such a chain of contracts. From this starting point, the study endeavors to examine the concept of privity of contract in European jurisdictions and particularly the methods used to circumvent the rule in construction industry practice. For this purpose, the study employs both a comparative and a legal dogmatic method. The principal aim is to discover general principles not just from a theoretical perspective, but from a practical angle as well. Consequently, a considerable amount of legal praxis as well as international industry forms has been used as references. The most important include, inter alia, the model forms produced by FIDIC as well as Olli Norros' doctoral thesis "Vastuu sopimusketjussa".
According to the conclusions of this study, the four principal ways to circumvent privity of contract in European construction projects are liability in a chain of contracts, collateral contracts, assignment of rights, and security instruments. Contemporary European jurisdictions recognize these concepts, and the references suggest that they are an integral part of current market practice. Despite the fact that such means of circumventing privity of contract raise a number of legal questions and considerably affect the risk position of the subcontractor in particular, it seems that the erosion of the premise of privity of contract is an increasing trend in the construction industry.
Abstract:
Network Intrusion Detection Systems (NIDS) are computer systems which monitor a network with the aim of discerning malicious from benign activity on that network. While a wide range of approaches have met varying levels of success, most IDSs rely on having access to a database of known attack signatures which are written by security experts. Nowadays, in order to solve problems with false positive alerts, correlation algorithms are used to add additional structure to sequences of IDS alerts. However, such techniques are of no help in discovering novel attacks or variations of known attacks, something the human immune system (HIS) is capable of doing in its own specialised domain. This paper presents a novel immune algorithm for application to the IDS problem. The goal is to discover packets containing novel variations of attacks covered by an existing signature base.
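The abstract does not spell out its algorithm, but a classic artificial-immune-system scheme consistent with its stated goal is negative selection: randomly generated detectors are censored against benign ("self") traffic during training, and any surviving detector that matches incoming data flags it as anomalous. A toy bit-string sketch with an assumed r-contiguous-bits matching rule (the paper's actual algorithm may differ):

```python
import random

def matches(detector, sample, r=4):
    """r-contiguous-bits rule: match if any r-length window agrees exactly."""
    return any(detector[i:i + r] == sample[i:i + r]
               for i in range(len(sample) - r + 1))

def train_detectors(self_set, n_detectors=200, length=16, r=4, seed=0):
    """Keep only random detectors that match no benign ('self') sample."""
    rng = random.Random(seed)
    detectors = []
    while len(detectors) < n_detectors:
        cand = "".join(rng.choice("01") for _ in range(length))
        if not any(matches(cand, s, r) for s in self_set):  # censor self-matchers
            detectors.append(cand)
    return detectors

self_set = ["0000111100001111", "0000111100110011"]   # toy benign patterns
detectors = train_detectors(self_set)
probe = "1111000011110000"                            # clearly non-self traffic
print(any(matches(d, probe) for d in detectors))      # flagged as anomalous?
```

Real NIDS variants operate on encoded packet features rather than raw bit strings, and tune the matching threshold r to balance detection rate against false positives.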
Abstract:
Abstract: Quantitative Methods (QM) is a compulsory course in the Social Science program in CEGEP. Many QM instructors assign a number of homework exercises to give students the opportunity to practice the statistical methods, which enhances their learning. However, traditional written exercises have two significant disadvantages. The first is that the feedback process is often very slow. The second disadvantage is that written exercises can generate a large amount of correcting for the instructor. WeBWorK is an open-source system that allows instructors to write exercises which students answer online. Although originally designed to write exercises for math and science students, WeBWorK programming allows for the creation of a variety of questions which can be used in the Quantitative Methods course. Because many statistical exercises generate objective and quantitative answers, the system is able to instantly assess students’ responses and tell them whether they are right or wrong. This immediate feedback has been shown to be theoretically conducive to positive learning outcomes. In addition, the system can be set up to allow students to re-try the problem if they got it wrong. This has benefits both in terms of student motivation and reinforcing learning. Through the use of a quasi-experiment, this research project measured and analysed the effects of using WeBWorK exercises in the Quantitative Methods course at Vanier College. Three specific research questions were addressed. First, we looked at whether students who did the WeBWorK exercises got better grades than students who did written exercises. Second, we looked at whether students who completed more of the WeBWorK exercises got better grades than students who completed fewer of the WeBWorK exercises. Finally, we used a self-report survey to find out what students’ perceptions and opinions were of the WeBWorK and the written exercises. 
For the first research question, a crossover design was used in order to compare whether the group that did WeBWorK problems during one unit would score significantly higher on that unit test than the other group that did the written problems. We found no significant difference in grades between students who did the WeBWorK exercises and students who did the written exercises. The second research question looked at whether students who completed more of the WeBWorK exercises would get significantly higher grades than students who completed fewer of the WeBWorK exercises. The straight-line relationship between number of WeBWorK exercises completed and grades was positive in both groups. However, the correlation coefficients for these two variables showed no real pattern. Our third research question was investigated by using a survey to elicit students’ perceptions and opinions regarding the WeBWorK and written exercises. Students reported no difference in the amount of effort put into completing each type of exercise. Students were also asked to rate each type of exercise along six dimensions and a composite score was calculated. Overall, students gave a significantly higher score to the written exercises, and reported that they found the written exercises were better for understanding the basic statistical concepts and for learning the basic statistical methods. However, when presented with the choice of having only written or only WeBWorK exercises, slightly more students preferred or strongly preferred having only WeBWorK exercises. The results of this research suggest that the advantages of using WeBWorK to teach Quantitative Methods are variable. The WeBWorK system offers immediate feedback, which often seems to motivate students to try again if they do not have the correct answer. However, this does not necessarily translate into better performance on the written tests and on the final exam. 
What has been learned is that the WeBWorK system can be used by interested instructors to enhance student learning in the Quantitative Methods course. Further research may examine more specifically how this system can be used more effectively.
Abstract:
People go through their lives making all kinds of decisions, and some of these decisions affect their demand for transportation: for example, their choices of where to live and where to work, how and when to travel, and which route to take. Transport-related choices are typically time dependent and characterized by a large number of alternatives that can be spatially correlated. This thesis deals with models that can be used to analyze and predict discrete choices in large-scale networks. The proposed models and methods are highly relevant for, but not limited to, transport applications. We model decisions as sequences of choices within the dynamic discrete choice framework, also known as parametric Markov decision processes. Such models are known to be difficult to estimate and to apply to make predictions, because dynamic programming problems need to be solved in order to compute choice probabilities. In this thesis we show that it is possible to exploit the network structure and the flexibility of dynamic programming so that the dynamic discrete choice modeling approach is not only useful for modeling time-dependent choices, but also makes it easier to model large-scale static choices. The thesis consists of seven articles containing a number of models and methods for estimating, applying and testing large-scale discrete choice models. In the following we group the contributions under three themes: route choice modeling, large-scale multivariate extreme value (MEV) model estimation, and nonlinear optimization algorithms. Five articles are related to route choice modeling. We propose different dynamic discrete choice models that allow paths to be correlated, based on the MEV and mixed logit models. The resulting route choice models become expensive to estimate, and we deal with this challenge by proposing innovative methods that reduce the estimation cost.
For example, we propose a decomposition method that not only opens up the possibility of mixing, but also speeds up the estimation for simple logit models, which has implications for traffic simulation as well. Moreover, we compare the utility maximization and regret minimization decision rules, and we propose a misspecification test for logit-based route choice models. The second theme is related to the estimation of static discrete choice models with large choice sets. We establish that a class of MEV models can be reformulated as dynamic discrete choice models on the networks of correlation structures. These dynamic models can then be estimated quickly using dynamic programming techniques and an efficient nonlinear optimization algorithm. Finally, the third theme focuses on structured quasi-Newton techniques for estimating discrete choice models by maximum likelihood. We examine and adapt switching methods that can be easily integrated into usual optimization algorithms (line search and trust region) to accelerate the estimation process. The proposed dynamic discrete choice models and estimation methods can be used in various discrete choice applications. In the area of big data analytics, models that can deal with large choice sets and sequential choices are important. Our research can therefore be of interest in various demand analysis applications (predictive analytics) or can be integrated with optimization models (prescriptive analytics). Furthermore, our studies indicate the potential of dynamic programming techniques in this context, even for static models, which opens up a variety of future research directions.
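The dynamic-programming step that makes such models expensive to estimate can be illustrated on a toy network: a recursive, logit-based route choice model computes an expected-maximum-utility ("logsum") value function backward from the destination, from which link choice probabilities follow. The network and link utilities below are invented for illustration.

```python
import math

# successors[node] = list of (next_node, deterministic_link_utility)
successors = {
    "A": [("B", -1.0), ("C", -1.5)],
    "B": [("D", -1.0)],
    "C": [("D", -0.5)],
    "D": [],  # destination
}

def value_functions(successors, dest="D"):
    """Backward logsum recursion V(k) = log sum_a exp(v(a|k) + V(next(a)))."""
    V = {dest: 0.0}
    for _ in range(len(successors)):  # enough sweeps for this acyclic network
        for node, links in successors.items():
            if node == dest or not links:
                continue
            if all(nxt in V for nxt, _ in links):
                V[node] = math.log(sum(math.exp(u + V[nxt]) for nxt, u in links))
    return V

V = value_functions(successors)

def choice_probs(node):
    """Logit link-choice probabilities implied by the value functions."""
    links = successors[node]
    weights = [math.exp(u + V[nxt]) for nxt, u in links]
    total = sum(weights)
    return {nxt: w / total for (nxt, _), w in zip(links, weights)}

print(round(V["A"], 3))
print({k: round(p, 3) for k, p in choice_probs("A").items()})
```

On real road networks with cycles, the same recursion becomes a fixed-point system solved by value iteration or linear algebra, which is exactly the cost the thesis's decomposition method attacks.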
Abstract:
The objective of this study was to evaluate the association of visual scores of body structure, precocity and muscularity with production traits (body weight at 18 months and average daily gain) and a reproductive trait (scrotal circumference) in Brahman cattle, in order to determine the possible use of these scores as selection criteria to improve carcass quality. Covariance components were estimated by the restricted maximum likelihood method using an animal model that included contemporary group as a fixed effect. A total of 1,116 observations of body structure, precocity and muscularity were used. Heritability was 0.39, 0.43 and 0.40 for body structure, precocity and muscularity, respectively. The genetic correlations were 0.79 between body structure and precocity, 0.87 between body structure and muscularity, and 0.91 between precocity and muscularity. The genetic correlations between visual scores and body weight at 18 months were positive (0.77, 0.57 and 0.59 for body structure, precocity and muscularity, respectively). Similar genetic correlations were observed between average daily gain and visual scores (0.60, 0.57 and 0.48, respectively), whereas the genetic correlations between scrotal circumference and these scores were low (0.13, 0.02, and 0.13). The results indicate that visual scores can be used as selection criteria in Brahman breeding programs. Favorable correlated responses should be seen in average daily gain and body weight at 18 months. However, no correlated response is expected for scrotal circumference.
Abstract:
Health research and the use of its results have served as a basis for improving the quality of care, requiring of health professionals knowledge in the specific area in which they work and knowledge of research methodology, including observation techniques and techniques for data collection and analysis, so that they can more easily be capable readers of research results. Health professionals are privileged observers of human responses to health and illness, and can contribute to the development and well-being of individuals, often in situations of great vulnerability. In child health and paediatrics the focus is on family-centred care, favouring the harmonious development of the child and young person and valuing measurable health outcomes that make it possible to determine the effectiveness of interventions and the quality of health and life. In the paediatric context we highlight evidence-based practice, the importance attributed to research and to the application of research results in clinical practice, and the development of standardized measurement instruments, namely assessment scales in wide clinical use, which facilitate the appraisal and evaluation of the development and health of children and young people and result in health gains. The systematic observation of neonatal and paediatric populations with assessment scales has been increasing, which has allowed a more balanced assessment of children as well as observation grounded in theory and in research results. Some of these aspects served as the basis for this work, which aims to address three fundamental objectives.
To address the first objective, "To identify in the scientific literature the statistical tests most frequently used by researchers in child health and paediatrics when using assessment scales", a systematic literature review was carried out with the aim of analysing scientific articles whose data collection instruments were assessment scales developed with ordinal variables, in the area of child and adolescent health, and of identifying the statistical tests applied to these variables. An exploratory analysis of the articles showed that researchers use different instruments with different ordinal measurement formats (3, 4, 5, 7 or 10 points) and apply parametric tests, non-parametric tests, or both simultaneously, to this type of variable, whatever the sample size. The description of the methodology does not always make explicit whether the assumptions of the tests are met. The articles consulted do not always refer to the frequency distribution of the variables (symmetry/asymmetry) or to the magnitude of the correlations between items. This reading of the literature supported the writing of two articles, one a systematic literature review and the other a theoretical reflection. Although some answers were found to the doubts faced by researchers and professionals who work with these instruments, there is a need for simulation studies that confirm some real situations and some existing theory, and that address other aspects within which real scenarios can be framed, so as to facilitate decision-making by researchers and clinicians who use assessment scales.
To address the second objective, "To compare the performance, in terms of power and probability of type I error, of the 4 parametric MANOVA statistics with 2 non-parametric MANOVA statistics when randomly generated correlated ordinal variables are used", we developed a simulation study using the Monte Carlo method, carried out in the R software. The design of the simulation study included a vector of 3 dependent variables, one independent variable (a factor with three groups), assessment scales with measurement formats of 3, 4, 5 and 7 points, different marginal probabilities (p1 for a symmetric distribution, p2 for a positively skewed distribution, p3 for a negatively skewed distribution and p4 for a uniform distribution) in each of the three groups, correlations of low, medium and high magnitude (r = 0.10, r = 0.40 and r = 0.70, respectively), and six sample sizes (n = 30, 60, 90, 120, 240, 300). The analysis of the results showed that Roy's largest root was the statistic with the highest estimates of type I error probability and test power. Test power behaves differently depending on the frequency distribution of the item responses, the magnitude of the correlations between items, the sample size and the measurement format of the scale.
Based on the frequency distribution, we considered three distinct situations: the first (with marginal probabilities p1,p1,p4 and p4,p4,p1), in which the power estimates were very low across the different scenarios; the second (with marginal probabilities p2,p3,p4; p1,p2,p3 and p2,p2,p3), in which the magnitude of the power is high in samples of 60 or more observations for scales with 3, 4 and 5 points, and less high for scales with 7 points, although of the same magnitude in samples of 120 observations, whatever the scenario; and the third (with marginal probabilities p1,p1,p2; p1,p2,p4; p2,p2,p1; p4,p4,p2 and p2,p2,p4), in which the greater the intensity of the correlations between items and the number of scale points, and the smaller the sample size, the lower the power of the tests, with Wilks' lambda applied to the ranks being more powerful than all the other MANOVA statistics, with values immediately below those of Roy's largest root. Nevertheless, the magnitude of the power of the parametric and non-parametric tests is similar in samples larger than 90 observations (with correlations of low and medium magnitude between the dependent variables) for scales with 3, 4 and 5 points, and larger than 240 observations, for correlations of low intensity, for scales with 7 points. In the simulation study, and based on the frequency distribution, we concluded that in the first simulation situation, across the different scenarios, the power is of low magnitude because MANOVA does not detect differences between groups owing to their similarity. In the second simulation situation, across the different scenarios, the magnitude of the power is high in all scenarios with sample sizes above 60 observations, so parametric tests can be applied.
Na terceira situação de simulação, e para os diferentes cenários quanto menor a dimensão da amostra e mais elevada a intensidade das correlações e o número de pontos da escala, menor a potência dos testes, sendo a magnitude das potências mais elevadas no teste de Wilks aplicado às ordens, seguido do traço de Pillai aplicado às ordens. No entanto, a magnitude das potências dos testes paramétricos e não paramétricos assemelha-se nas amostras com maior dimensão e correlações de baixa e média magnitude. Para dar resposta ao terceiro objetivo “Enquadrar os resultados da aplicação da MANOVA paramétrica e da MANOVA não paramétrica a dados reais provenientes de escalas de avaliação com um formato de medida com 3, 4, 5 e 7 pontos, nos resultados do estudo de simulação estatística” utilizaram-se dados reais que emergiram da observação de recém-nascidos com a escala de avaliação das competências para a alimentação oral, Early Feeding Skills (EFS), o risco de lesões da pele, com a Neonatal Skin Risk Assessment Scale (NSRAS), e a avaliação da independência funcional em crianças e jovens com espinha bífida, com a Functional Independence Measure (FIM). Para fazer a análise destas escalas foram realizadas 4 aplicações práticas que se enquadrassem nos cenários do estudo de simulação. A idade, o peso, e o nível de lesão medular foram as variáveis independentes escolhidas para selecionar os grupos, sendo os recém-nascidos agrupados por “classes de idade gestacional” e por “classes de peso” as crianças e jovens com espinha bífida por “classes etárias” e “níveis de lesão medular”. Verificou-se um bom enquadramento dos resultados com dados reais no estudo de simulação.
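The data-generation step of the simulation design above — correlated ordinal responses drawn under specified marginal probabilities and inter-item correlations — can be sketched with a Gaussian copula. This is an illustrative sketch only: the study itself was carried out in R, the abstract does not detail the exact generation mechanism, and the function name and example marginal below are ours.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def correlated_ordinal_sample(n, marginal_probs, r, n_vars=3):
    """Draw n observations of n_vars correlated ordinal items via a
    Gaussian copula: correlated standard normals are cut at the
    thresholds implied by the marginal category probabilities."""
    cov = np.full((n_vars, n_vars), r)
    np.fill_diagonal(cov, 1.0)
    z = rng.multivariate_normal(np.zeros(n_vars), cov, size=n)
    # Thresholds from the cumulative marginal probabilities
    cuts = stats.norm.ppf(np.cumsum(marginal_probs)[:-1])
    return np.digitize(z, cuts) + 1  # categories 1..k

# Example: a 5-point scale with a symmetric marginal, medium correlation
p_sym = [0.1, 0.2, 0.4, 0.2, 0.1]
sample = correlated_ordinal_sample(60, p_sym, r=0.40)
```

Repeating such draws under the null and alternative hypotheses and counting test rejections is the standard way Monte Carlo estimates of type I error probability and power are obtained.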
Abstract:
Background: Celiac disease is an immune-mediated inflammation of the small intestine caused by sensitivity to dietary gluten in genetically susceptible individuals. Objectives: In this study, we aimed to evaluate the predictive value of tissue transglutaminase (tTG) antibodies for the diagnosis of celiac disease in a pediatric population, in order to determine whether duodenal biopsy can be avoided. Patients and Methods: The subjects were selected among individuals with probable celiac disease referred to a gastrointestinal clinic. After physical examination and tissue transglutaminase-immunoglobulin A (tTG-IgA) testing, upper endoscopy was performed if the serological titer was higher than 18 IU/mL. Therapy was started according to the pathologic results. Results: The sample size was calculated to be 121 subjects (69 female and 52 male); the average age was 8.4 years. A significant association was found between serological titer and pathologic results; in other words, subjects with a high serological titer had more positive pathologic results for celiac disease than the others (P < 0.001). Maximum sensitivity (65%) and specificity (65.4%) were achieved at a serological titer of 81.95 IU/mL; the calculated accuracy was lower than in other studies. The results also indicated that lower antibody titers were observed in patients with failure to gain weight, and higher antibody titers were reported in diabetic patients. Conclusions: A single serological test (tTG-IgA) was not sufficient to avoid intestinal biopsy.
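The cutoff analysis reported above (sensitivity 65% and specificity 65.4% at a titer of 81.95 IU/mL) amounts to dichotomizing the serological titer and comparing it against the biopsy reference standard. A minimal sketch in Python, using small hypothetical data that are not from the study:

```python
def sens_spec(titers, biopsy_positive, cutoff):
    """Sensitivity and specificity of a serological cutoff against
    biopsy results (the reference standard)."""
    tp = sum(t >= cutoff and d for t, d in zip(titers, biopsy_positive))
    fn = sum(t < cutoff and d for t, d in zip(titers, biopsy_positive))
    tn = sum(t < cutoff and not d for t, d in zip(titers, biopsy_positive))
    fp = sum(t >= cutoff and not d for t, d in zip(titers, biopsy_positive))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical illustrative data (not the study's data)
titers = [20, 35, 60, 85, 90, 120, 150, 25, 40, 95]
disease = [False, False, True, True, False, True, True, False, True, True]
se, sp = sens_spec(titers, disease, cutoff=81.95)  # se ≈ 0.667, sp = 0.75
```

Scanning all candidate cutoffs and maximizing a summary such as Youden's index (sensitivity + specificity − 1) is the usual way an optimal cutoff like 81.95 IU/mL is selected.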
Abstract:
Several modern-day cooling applications require the incorporation of mini/micro-channel shear-driven flow condensers, and several design challenges must be overcome to meet those requirements. The difficulty of developing effective design tools for shear-driven flow condensers is exacerbated by the lack of a bridge between physics-based modeling of condensing flows and the current, popular approach based on semi-empirical heat transfer correlations. One of the primary contributors to this disconnect is that typical heat transfer correlations eliminate the dependence of the heat transfer coefficient on the method of cooling employed on the condenser surface, even though this assumption may well not hold. This is in direct contrast to physics-based modeling approaches, in which the thermal boundary conditions have a direct and substantial impact on the heat transfer coefficient values. Typical heat transfer correlations instead introduce vapor quality as one of the variables on which the heat transfer coefficient depends. This study shows how, under certain conditions, a heat transfer correlation from direct physics-based modeling can be made equivalent to typical engineering heat transfer correlations without making the same a priori assumptions. Another factor that casts doubt on the validity of heat transfer correlations is the opacity associated with the application of flow regime maps for internal condensing flows. It is well known that flow regimes strongly influence heat transfer rates. Nevertheless, several heat transfer correlations ignore flow regimes entirely and present a single correlation for all of them. This is believed to be inaccurate, since one would expect significant differences in the heat transfer correlations for different flow regimes. 
Several other studies present a heat transfer correlation for a particular flow regime but ignore how the extent of that flow regime is established. This thesis provides a definitive answer (in the context of stratified/annular flows) to two questions: (i) whether a heat transfer correlation can always be independent of the thermal boundary condition and represented as a function of vapor quality, and (ii) whether a heat transfer correlation can be obtained for a flow regime independently, without knowing the flow regime boundary (even if that boundary is represented through a separate and independent correlation). To obtain the results needed to answer these questions, this study uses two numerical simulation tools: the approximate but highly efficient Quasi-1D simulation tool and the exact but more expensive 2D Steady Simulation tool. Using these tools and approximate values of the flow regime transitions, a deeper understanding of the current state of knowledge on flow regime maps and heat transfer correlations in shear-driven internal condensing flows is obtained. The ideas presented here can be extended to other flow regimes of shear-driven flows, and analogous correlations can be obtained for internal condensers in gravity-driven and mixed-driven configurations.
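The kind of engineering correlation discussed above — a heat transfer coefficient written as a function of vapor quality, with no reference to the thermal boundary condition — can be illustrated by the widely used Shah (1979) correlation for in-tube condensation. This is only an example of the correlation form being criticized, not the correlation developed in this thesis:

```python
def shah_condensation_h(h_l, x, p_reduced):
    """Shah (1979) correlation for the local condensation heat
    transfer coefficient: h depends on the liquid-only coefficient
    h_l, the local vapor quality x, and the reduced pressure
    p_reduced -- but not on how the condenser wall is cooled."""
    return h_l * ((1 - x) ** 0.8
                  + 3.8 * x ** 0.76 * (1 - x) ** 0.04 / p_reduced ** 0.38)

# At x = 0 (all liquid) the correlation collapses to h_l, as expected;
# at higher quality the two-phase coefficient is substantially larger.
h_liquid_only = shah_condensation_h(1000.0, 0.0, 0.1)
h_midstream = shah_condensation_h(1000.0, 0.5, 0.1)
```

Note that nothing in this functional form distinguishes a uniform-temperature wall from a uniform-heat-flux wall, which is precisely the assumption the thesis examines.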
Abstract:
Master's dissertation presented at ISPA – Instituto Universitário to obtain the degree of Master in Psychology, specialty in Clinical Psychology.
Abstract:
The relative roles of drift versus selection underlying the evolution of bacterial species within the gut microbiota remain poorly understood. The large sizes of bacterial populations in this environment suggest that even adaptive mutations with weak effects, thought to be the most frequently occurring, could substantially contribute to a rapid pace of evolutionary change in the gut. We followed the emergence of intra-species diversity in a commensal Escherichia coli strain that had previously acquired an adaptive mutation with a strong effect during one week of colonization of the mouse gut. Following this first step, which consisted of inactivating a metabolic operon, one third of the subsequent adaptive mutations were found to have a selective effect as high as the first. Nevertheless, the order of the adaptive steps was strongly affected by a mutational hotspot with an exceptionally high mutation rate of 10^-5. The pattern of polymorphism emerging in populations evolving within different hosts was characterized by periodic selection, which reduced diversity, but also by frequency-dependent selection, which actively maintained genetic diversity. Furthermore, the continuous emergence of similar phenotypes due to distinct mutations, known as clonal interference, was pervasive. Evolutionary change within the gut is therefore highly repeatable within and across hosts, with adaptive mutations of selection coefficients as strong as 12% accumulating without strong constraints on genetic background. In vivo competitive assays showed that one of the second steps (focA) exhibited positive epistasis with the first, while another (dcuB) exhibited negative epistasis. These data show that strong-effect adaptive mutations continuously recur in gut commensal bacterial species.
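The interplay of drift and selection described above can be illustrated with a minimal Wright-Fisher sketch (hypothetical parameters; this is not the study's analysis code): a beneficial mutation with a selection coefficient comparable to the 12% reported here readily escapes drift and sweeps in a large population.

```python
import random

random.seed(1)

def wright_fisher_fixation(pop_size, s, p0, generations):
    """Track the frequency of a beneficial mutation (selection
    coefficient s) under Wright-Fisher selection plus drift,
    stopping at loss (0.0) or fixation (1.0)."""
    p = p0
    for _ in range(generations):
        # Selection: mutant fitness 1 + s, wild-type fitness 1
        p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))
        # Drift: binomial resampling of the next generation
        p = sum(random.random() < p_sel for _ in range(pop_size)) / pop_size
        if p in (0.0, 1.0):
            break
    return p

# A mutation with s = 0.12 starting at 1% frequency in a population
# of 10,000 almost surely fixes well within 2,000 generations.
final = wright_fisher_fixation(10_000, 0.12, 0.01, 2000)
```

With many such mutations arising concurrently, lineages carrying different beneficial mutations compete — the clonal interference regime the abstract describes.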