883 results for One and many


Relevance:

100.00%

Publisher:

Abstract:

Among the various strategies to reduce the incidence of non-communicable diseases, reduction of sodium intake in the general population has been recognized as one of the most cost-effective, because of its potential impact on the development of hypertension and cardiovascular diseases. Yet this strategic health recommendation of the WHO and many other international organizations is far from universally accepted. Indeed, several unresolved scientific and epidemiological questions maintain an ongoing debate: what low level of sodium intake is adequate to recommend to the general population, and whether national strategies should target the overall population or only higher-risk fractions of it, such as salt-sensitive patients. In this paper, we review recent results in the literature regarding salt, blood pressure and cardiovascular risk, and we present the recommendations recently proposed by a group of Swiss experts. The participating medical societies propose that national health authorities be encouraged to continue their discussion with the food industry in order to reduce the sodium content of food products, with a target mean salt intake of 5-6 grams per day in the population. Moreover, all initiatives to improve information on the effect of salt on health and on the salt content of food are supported.

Abstract:

There is an increasing reliance on computers to solve complex engineering problems, because computers, in addition to supporting the development and implementation of adequate and clear models, can greatly reduce the cost involved. The ability of computers to perform complex calculations at high speed has enabled the creation of highly complex systems to model real-world phenomena. The complexity of fluid dynamics problems makes it difficult or impossible to solve the equations for an object in a flow exactly. Approximate solutions can be obtained by constructing and measuring prototypes placed in a flow, or by numerical simulation. Since prototypes can be prohibitively time-consuming and expensive, many have turned to simulations to provide insight during the engineering process; the simulation setup and parameters can be altered much more easily than in a real-world experiment. The objective of this research work is to develop numerical models for different suspensions (fiber suspensions, blood flow through microvessels and branching geometries, and magnetic fluids), as well as for fluid flow through porous media. The models have merit as scientific tools and also have practical applications in industry. Most of the numerical simulations were performed with the commercial software Fluent, with user-defined functions added to apply a multiscale method and a magnetic field. The results from the simulation of fiber suspensions can elucidate the physics behind the break-up of a fiber floc, opening the possibility of developing a meaningful numerical model of fiber flow. The simulation of blood movement from an arteriole through a venule via a capillary showed that the VOF-based model can successfully predict the deformation and flow of RBCs in an arteriole; furthermore, the result corresponds to the experimental observation that the RBC is deformed during movement.
The concluding remarks provide a consistent methodology and a mathematical and numerical framework for the simulation of blood flows in branching geometries. Analysis of the ferrofluid simulations indicates that the magnetic Soret effect can be even stronger than the conventional one and that its strength depends on the strength of the magnetic field, as confirmed experimentally by Völker and Odenbach. It was also shown that when a magnetic field is perpendicular to the temperature gradient, there is an additional increase in heat transfer compared to the cases where the magnetic field is parallel to the temperature gradient. In addition, a statistical evaluation (the Taguchi technique) of the magnetic fluids showed that temperature and the initial concentration of the magnetic phase make the maximum and minimum contributions to thermodiffusion, respectively. In the simulation of flow through porous media, the dimensionless pressure drop was studied at different Reynolds numbers, based on pore permeability and interstitial fluid velocity. The results agreed well with the correlation of Macdonald et al. (1979) over the range of flow Reynolds numbers studied. Furthermore, the calculated dispersion coefficients in the cylinder geometry were found to be in agreement with those of Seymour and Callaghan.
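The Macdonald et al. (1979) correlation mentioned above is an Ergun-type relation linking pressure drop to flow conditions in a packed bed. As a hedged illustration, a minimal sketch might look like the following; the function name and variable choices are invented here, and the coefficients A ≈ 180 and B ≈ 1.8 for smooth particles are quoted from the general literature, not from this work.

```python
def macdonald_pressure_gradient(u, d_p, eps, rho, mu, A=180.0, B=1.8):
    """Ergun-type pressure gradient (Pa/m) for flow through a packed bed.

    Macdonald et al. (1979) form with A ~ 180 and B ~ 1.8 for smooth
    particles (illustrative literature values, not fitted to this work).
    u: superficial velocity (m/s), d_p: particle diameter (m),
    eps: bed porosity, rho: fluid density, mu: dynamic viscosity.
    """
    viscous = A * mu * (1.0 - eps) ** 2 / (eps ** 3 * d_p ** 2) * u
    inertial = B * rho * (1.0 - eps) / (eps ** 3 * d_p) * u ** 2
    return viscous + inertial

# Illustrative case: water through a bed of 1 mm particles, porosity 0.4
dp_dz = macdonald_pressure_gradient(0.01, 1e-3, 0.4, 1000.0, 1e-3)
```

The viscous term dominates at low Reynolds numbers and the inertial term at high ones, which is why the dimensionless pressure drop is studied as a function of Reynolds number.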

Abstract:

The purpose of the present thesis was to explore different aspects of decision making and expertise in investigations of child sexual abuse (CSA), and thereby shed some light on the reasons for shortcomings in the investigation process. Clinicians’ subjective attitudes as well as scientifically based knowledge concerning CSA, CSA investigation and interviewing were explored. Furthermore, the clinicians’ own view of their expertise, and of what enhances this expertise, was investigated, as were the effects of scientific knowledge, experience and attitudes on decision making in a case of CSA. Finally, the effects of different kinds of feedback, as well as of experience, on the ability to evaluate CSA in the light of children’s behavior and base rates were investigated. Both explorative and experimental methods were used. The purpose of Study I was to investigate whether clinicians investigating CSA rely more on scientific knowledge or on clinical experience when evaluating their own expertise. Another goal was to examine what kinds of beliefs the clinicians held. The connections between these different factors were investigated. A questionnaire covering demographic data, experience, knowledge about CSA, self-evaluated expertise and beliefs about CSA was given to social workers, child psychiatrists and psychologists working with children. The results showed that the clinicians relied more on their clinical experience than on scientific knowledge when evaluating their expertise as investigators of CSA. Furthermore, social workers held stronger attitudes in favor of children than the other groups, while child psychiatrists had more negative attitudes towards the criminal justice system. Male participants held less strong beliefs than female participants.
The findings indicate that the education of CSA investigators should focus more on theoretical knowledge and decision-making processes, as well as on the role of beliefs. In Study II, school and family counseling psychologists completed a Child Sexual Abuse Attitude and Belief Scale. Four CSA-related attitude and belief subscales were identified: 1. the Disclosure subscale, reflecting favoring disclosure at any cost; 2. the Pro-Child subscale, reflecting unconditional belief in children's reports; 3. the Intuition subscale, reflecting favoring an intuitive approach to CSA investigations; and 4. the Anti-Criminal Justice System subscale, reflecting negative attitudes towards the legal system. Beliefs that were erroneous according to empirical research were analyzed separately. The results suggest that some psychologists hold extreme attitudes and many erroneous beliefs related to CSA, and that some misconceptions are common. Female participants tended to hold stronger attitudes than male participants. The more training in interviewing children the participants had, the more erroneous beliefs and the stronger attitudes they held. Experience did not affect attitudes and beliefs. In Study III, mental health professionals’ sensitivity to suggestive interviewing in CSA cases was explored, together with the effects of CSA-related attitudes and beliefs, and of experience with CSA investigations, on that sensitivity. The effect of base rate estimates of CSA on decisions was also examined. A questionnaire covering demographic data, different aspects of clinical experience, self-evaluated expertise, beliefs and knowledge about CSA, together with a set of ambiguous material based on real trial documents concerning an alleged CSA case, was given to child mental health professionals.
The experiment was based on a 2 (leading questions: yes vs. no) x 2 (stereotype induction: yes vs. no) x 2 (emotional tone: pressure to respond vs. no pressure to respond) x 2 (threats and rewards: yes vs. no) between-subjects factorial design, in which the suggestiveness of the methods used to obtain the child's responses was varied. There was an additional condition in which the material did not contain any interview transcripts. The results showed that clinicians are sensitive only to the presence of leading questions, not to the presence of other suggestive techniques. Furthermore, the clinicians were not sensitive to the possibility that suggestive techniques could have been used when no interview transcripts were included in the trial material. Experience affected the clinicians' sensitivity only regarding leading questions. Strong beliefs related to CSA lessened the sensitivity to leading questions, and those scoring high on the belief scales used in this study were even more prone to prosecute than other participants when suggestive influences other than leading questions were present. Controversy exists regarding the effects of experience and feedback on clinical decision making. In Study IV, the impact of the number of handled cases and of feedback on decisions in cases of alleged CSA was investigated. One hundred vignettes describing cases of suspected CSA were given to students with no experience of investigating CSA. The vignettes were based on statistical data about the symptoms and prevalence of CSA, and according to the theoretical likelihood of CSA the children described were categorized as abused or not abused. The participants were asked to decide whether abuse had occurred.
They were divided into four groups: one received feedback on whether their decision was right or wrong, one received information about the cognitive processes involved in decision making, one received both, and one received no feedback at all. The results showed that participants who received feedback on their performance made more correct positive decisions, and participants who received information about decision-making processes made more correct negative decisions. Feedback and information combined decreased the number of correct positive decisions but increased the number of correct negative decisions. The number of cases read had, in itself, a positive effect on correct positive decisions.
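The vignette task in Study IV amounts to scoring binary decisions against a known categorization of each case. A minimal sketch of that scoring, with invented function name and data (not the study's materials), could be:

```python
def decision_scores(truth, decisions):
    """Tally correct positive and correct negative decisions.

    truth: sequence of booleans, True = case categorized as abused.
    decisions: sequence of booleans, True = participant judged abuse present.
    Returns (correct positives, correct negatives), mirroring the two
    outcome measures described for the vignette experiment.
    """
    correct_pos = sum(1 for t, d in zip(truth, decisions) if t and d)
    correct_neg = sum(1 for t, d in zip(truth, decisions) if not t and not d)
    return correct_pos, correct_neg

# Illustrative: four vignettes, two categorized as abuse, mixed decisions
scores = decision_scores([True, True, False, False],
                         [True, False, False, True])
```

Separating the two counts matters because, as the results show, different feedback conditions moved correct positive and correct negative decisions in different directions.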

Abstract:

Neste Oil has introduced plant oils and animal fats as raw materials for the production of NExBTL renewable diesel, and these raw materials differ from conventional mineral-based oils. One subject of study for the new raw materials is their thermal degradation, also known as pyrolysis. The aim of this master’s thesis is to increase knowledge of the thermal degradation of these new raw materials and to identify possible gaseous harmful thermal degradation compounds. Another aim is to determine the health and environmental hazards of the identified compounds, and a further objective is to examine how hazardous compounds could form in the production of NExBTL diesel. Plant oils and animal fats consist mostly of triglycerides. Pyrolysis of triglycerides is a complex phenomenon, and many degradation products can be formed. Based on the literature study, 13 hazardous degradation products were identified, one of which was acrolein, a compound that is very toxic and dangerous to the environment. Our own pyrolysis experiments were carried out with rapeseed and palm oils, and with a mixture of palm oil and animal fat. At least 12 hazardous compounds, including acrolein, were identified in the gas phase. According to the experiments, the factors that influence acrolein formation are the duration of the experiment, the atmosphere (air/hydrogen) in which the experiment is carried out, and the characteristics of the oil used. The production of NExBTL diesel is not based on pyrolysis, so thermal degradation is possible only under abnormal process conditions.

Abstract:

Crystallization is a purification method used to obtain crystalline product of a certain crystal size. It is one of the oldest industrial unit processes, and it remains common in modern industry because of its good purification capability from rather impure solutions at reasonably low energy consumption. However, the process is extremely challenging to model and control, because it involves inhomogeneous mixing and many simultaneous phenomena such as nucleation, crystal growth and agglomeration. All these phenomena depend on supersaturation, i.e. the difference between the actual liquid-phase concentration and the solubility. Homogeneous mass and heat transfer in the crystallizer would greatly simplify the modelling and control of crystallization processes; such conditions are, however, not the reality, especially in industrial-scale processes. Consequently, the hydrodynamics of crystallizers, i.e. the combination of mixing, feed and product removal flows, and recycling of the suspension, needs to be thoroughly investigated. Understanding hydrodynamics is important in crystallization, especially in larger-scale equipment where uniform flow conditions are difficult to attain. It is also important to understand the different size scales of mixing: micro-, meso- and macromixing. Fast processes, like nucleation and chemical reactions, typically depend strongly on micro- and mesomixing, but macromixing, which equalizes the concentrations of all species within the entire crystallizer, cannot be disregarded. This study investigates the influence of hydrodynamics on crystallization processes. Modelling of crystallizers with the mixed suspension mixed product removal (MSMPR) theory (ideal mixing), computational fluid dynamics (CFD), and a compartmental multiblock model is compared, and the importance of proper verification of the CFD and multiblock models is demonstrated. In addition, the influence of different hydrodynamic conditions on reactive crystallization process control is studied.
Finally, the effect of extreme local supersaturation is studied using power ultrasound to initiate nucleation. The present work shows that mixing and chemical feeding conditions clearly affect induction time and cluster formation, nucleation, growth kinetics, and agglomeration. Consequently, the properties of crystalline end products, e.g. crystal size and crystal habit, can be influenced by management of mixing and feeding conditions. Impurities may have varying impacts on crystallization processes. As an example, manganese ions were shown to replace magnesium ions in the crystal lattice of magnesium sulphate heptahydrate, increasing the crystal growth rate significantly, whereas sodium ions showed no interaction at all. Modelling of continuous crystallization based on MSMPR theory showed that the model is feasible in a small laboratory-scale crystallizer, whereas in larger pilot- and industrial-scale crystallizers hydrodynamic effects should be taken into account. For that reason, CFD and multiblock modelling are shown to be effective tools for modelling crystallization with inhomogeneous mixing. The present work also shows that the selection of the measurement point, or points in the case of multiprobe systems, is crucial when process analytical technology (PAT) is used to control larger-scale crystallization. The thesis concludes by describing how control of local supersaturation by highly localized ultrasound was successfully applied to induce nucleation and to control polymorphism in reactive crystallization of L-glutamic acid.
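The ideal-mixing MSMPR theory referred to above has a classical closed-form steady-state result: the crystal population density decays exponentially with size, n(L) = n0·exp(-L/(Gτ)). A minimal sketch of that textbook relation follows; the function name and parameter values are illustrative, not taken from this thesis.

```python
import math

def msmpr_population_density(L, n0, G, tau):
    """Steady-state MSMPR crystal population density, n(L) = n0*exp(-L/(G*tau)).

    L: crystal size (m), n0: nuclei population density (#/m^4),
    G: size-independent growth rate (m/s), tau: mean residence time (s).
    This is the classical ideal-mixing result; as the text notes, larger
    crystallizers deviate from it when hydrodynamics are inhomogeneous.
    """
    return n0 * math.exp(-L / (G * tau))

# Illustrative values: G = 1e-8 m/s, tau = 1 h; density at one
# characteristic size G*tau falls to n0/e
n_char = msmpr_population_density(3.6e-5, 1e12, 1e-8, 3600.0)
```

Fitting measured size distributions to this exponential form is the usual way nucleation (n0) and growth (G) kinetics are extracted from MSMPR experiments.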

Abstract:

Early identification of beginning readers at risk of developing reading and writing difficulties plays an important role in prevention and in the provision of appropriate intervention. In Tanzania, as in other countries, there are children in schools who are at risk of developing reading and writing difficulties. Many of these children complete school without being identified and without proper and relevant support. The main language in Tanzania is Kiswahili, a transparent language. Contextually relevant, reliable and valid instruments of identification are needed in Tanzanian schools. This study aimed at the construction and validation of a group-based screening instrument in the Kiswahili language for identifying beginning readers at risk of reading and writing difficulties. In studying the function of the test, there was special interest in analyzing the explanatory power of certain contextual factors related to the home and school. Halfway through grade one, 337 children from four purposively selected primary schools in Morogoro municipality were screened with a group test consisting of 7 subscales measuring phonological awareness, word and letter knowledge, and spelling. A questionnaire about background factors and the home and school environments related to literacy was also used. The schools were chosen based on performance status (i.e. high, good, average and low performing schools) in order to include variation. For validation, 64 children were chosen from the original sample to take an individual test measuring nonsense word reading, word reading, actual text reading, one-minute reading and writing. School marks from grade one and a follow-up test halfway through grade two were also used for validation. The correlations between the results from the group test and the three measures used for validation were very high (.83-.95). Content validity of the group test was established by using items drawn from authorized textbooks for reading in grade one.
Construct validity was analyzed through item analysis and principal component analysis. The difficulty level of most items in both the group test and the follow-up test was good, and the items discriminated well. Principal component analysis revealed one powerful latent dimension (an initial literacy factor), accounting for 93% of the variance. This implies that it could be possible to use any set of the subtests of the group test for screening and prediction. K-means cluster analysis revealed four clusters: at-risk children, strugglers, readers and good readers. The main concern in this study was with the groups of at-risk children (24%) and strugglers (22%), who need the most assistance. The predictive validity of the group test was analyzed by correlating the measures from the two school years and by cross-tabulating grade one and grade two clusters. All the correlations were positive and very high, and 94% of the at-risk children in grade two had already been identified by the group test in grade one. The explanatory power of some of the home and school factors was very strong. The number of books at home accounted for 38% of the variance in reading and writing ability measured by the group test. Parents' reading ability and the support children received at home for schoolwork were also influential factors. Among the studied school factors, school attendance had the strongest explanatory power, accounting for 21% of the variance in reading and writing ability. Having attended nursery school was also of importance. Based on the findings of the study, a short version of the group test was created. It is suggested for use in screening processes in grade one aiming at identifying children at risk of reading and writing difficulties in the Tanzanian context. Suggestions for further research, as well as for actions to improve the literacy skills of Tanzanian children, are presented.
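The single dominant latent dimension reported above corresponds to the first principal component capturing most of the variance. For two subtests this share has a closed form from the 2x2 covariance matrix; the sketch below is a generic illustration of the idea (function name and data invented, not the study's analysis).

```python
import math

def first_component_share_2var(x, y):
    """Variance share of the first principal component for two subtests.

    Eigenvalues of the 2x2 covariance matrix [[a, c], [c, b]] are
    ((a+b) +/- sqrt((a-b)^2 + 4c^2)) / 2; the share of the larger one
    plays the role of 'variance explained' in a PCA with two variables.
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) ** 2 for xi in x) / (n - 1)          # var(x)
    b = sum((yi - my) ** 2 for yi in y) / (n - 1)          # var(y)
    c = sum((xi - mx) * (yi - my)
            for xi, yi in zip(x, y)) / (n - 1)             # cov(x, y)
    lam1 = (a + b + math.sqrt((a - b) ** 2 + 4 * c ** 2)) / 2
    return lam1 / (a + b)

# Perfectly correlated subtests -> one component carries all variance
share = first_component_share_2var([1, 2, 3, 4], [2, 4, 6, 8])
```

A share near 1.0, as in the study's 93%, is what justifies treating the subtests as measuring a single underlying "initial literacy" construct.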

Abstract:

Systems biology is a new, rapidly developing, multidisciplinary research field that aims to study biochemical and biological systems from a holistic perspective, with the goal of providing a comprehensive, system-level understanding of cellular behaviour. In this way, it addresses one of the greatest challenges faced by contemporary biology: to comprehend the function of complex biological systems. Systems biology combines methods that originate from scientific disciplines such as molecular biology, chemistry, engineering, mathematics, computer science and systems theory. Unlike “traditional” biology, systems biology focuses on high-level concepts such as network, component, robustness, efficiency, control, regulation, hierarchical design, synchronization and concurrency, among many others. The very terminology of systems biology is “foreign” to “traditional” biology; it marks a drastic shift in the research paradigm and indicates the close linkage of systems biology to computer science. One of the basic tools of systems biology is the mathematical modelling of life processes, tightly linked to experimental practice. The studies contained in this thesis revolve around a number of challenges commonly encountered in computational modelling in systems biology. The research comprises the development and application of a broad range of methods, originating in computer science and mathematics, for the construction and analysis of computational models in systems biology. In particular, the research is set up in the context of two biological phenomena chosen as modelling case studies: 1) the eukaryotic heat shock response and 2) the in vitro self-assembly of intermediate filaments, one of the main constituents of the cytoskeleton.
The range of presented approaches spans from heuristic, through numerical and statistical, to analytical methods applied in the effort to formally describe and analyse the two biological processes. We note, however, that although applied to certain case studies, the presented methods are not limited to them and can be utilized in the analysis of other biological mechanisms, as well as of complex systems in general. The full range of developed and applied modelling techniques and model analysis methodologies constitutes a rich modelling framework. Moreover, the presentation of the developed methods, their application to the two case studies, and the discussion of their potential and limitations point to the difficulties and challenges one encounters in the computational modelling of biological systems. The problems of model identifiability, model comparison, model refinement, model integration and extension, the choice of the proper modelling framework and level of abstraction, and the choice of the proper scope of the model run through this thesis.
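ODE-based mass-action modelling of the kind used in such case studies can be sketched with a simple forward-Euler integrator. The reversible binding reaction and rate constants below are invented for illustration only and are not the thesis' heat shock or self-assembly models; production work would use a stiff solver.

```python
def euler_simulate(deriv, y0, t_end, dt):
    """Forward-Euler integration of an ODE model y' = deriv(t, y).

    A minimal stand-in for the numerical ODE solvers used in
    computational systems biology; accurate only for small dt.
    """
    t, y = 0.0, list(y0)
    while t < t_end:
        dy = deriv(t, y)
        y = [yi + dt * di for yi, di in zip(y, dy)]
        t += dt
    return y

def rates(t, y):
    """Mass-action kinetics for an illustrative reversible reaction
    A + B <-> C with made-up rate constants kf = 1.0, kr = 0.5."""
    a, b, c = y
    kf, kr = 1.0, 0.5
    v = kf * a * b - kr * c
    return [-v, -v, v]

# Starting from a = b = 1, c = 0, the system relaxes toward equilibrium
state = euler_simulate(rates, [1.0, 1.0, 0.0], 10.0, 0.001)
```

Even this toy model exhibits the properties one checks in real models, such as conservation of the total amount of A (free plus bound) along the trajectory.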

Abstract:

Spermatogenesis, i.e. sperm production in the seminiferous tubules of the testis, is a complex process that takes over one month to complete. The life-long ability to produce sperm ultimately lies in a small population of undifferentiated cells, called spermatogonial stem cells (SSCs). These cells give rise to differentiating spermatogonia, which are committed to maturing into spermatozoa. SSCs represent a heterogeneous population of cells, and many aspects of their basic biology are still unknown. Understanding the mechanisms behind the cell fate decisions of these cells is important to gain more insight into the causes of infertility and testis cancer. In addition, an interesting new aspect is the use of testis-derived stem cells in regenerative medicine. Our data demonstrated that the adult mouse testis houses a population of Nanog-expressing spermatogonia. Based on mRNA and protein analysis, these cells are enriched in stage XII of the mouse seminiferous epithelial cycle. The cells derived from this stage have the highest capacity to give rise to ES cell-like cells which express Oct4 and Nanog. These cells are under tight non-GDNF regulation, but their fate can be dictated by activating p21 signalling. Comparative studies suggested that these cells are regulated like ES cells. Taken together, these data imply that pluripotent cells are present in the adult mammalian testis. CIP2A (cancerous inhibitor of PP2A) has been associated with tumour aggressiveness and poor prognosis. In the testis it is expressed by the descendants of stem cells, i.e. the spermatogonial progenitor cells. Our data suggest that CIP2A acts upstream of PLZF and is needed for quantitatively normal spermatogenesis. The classification of CIP2A as a cancer/testis gene makes it an attractive target for cancer therapy.
A study on the CIP2A-deficient mouse model demonstrates that systemic inhibition of CIP2A does not severely interfere with growth and development, or with tissue or organ function, except for the spermatogenic output. These data demonstrate that CIP2A is required for quantitatively normal spermatogenesis. Hedgehog (Hh) signalling is involved in the development and maintenance of many different tissues and organs. According to our data, Hh signalling is active at many different levels during rat spermatogenesis: in spermatogonia, spermatocytes and late elongating spermatids. Localization of Suppressor of Fused (SuFu), the negative regulator of the pathway, specifically in early elongating spermatids suggests that Hh signalling needs to be shut down in these cells. Introduction of an Hh signalling inhibitor resulted in an increase in germ cell apoptosis. Follicle-stimulating hormone (FSH) and inhibition of receptor tyrosine kinases resulted in down-regulation of Hh signalling. These data show that Hh signalling is under endocrine and paracrine control and that it promotes germ cell survival.

Abstract:

Background: Maternal diabetes affects many fetal organ systems, including the vasculature and the lungs, and the offspring of diabetic mothers have respiratory adaptation problems after birth. The mechanisms are multifactorial and the effects are prolonged during the postnatal period. The increasing incidence of diabetic pregnancies accentuates the importance of identifying the pathological mechanisms that cause the metabolic and genetic changes occurring in offspring born to diabetic mothers. Aims and methods: The aim of this thesis was to determine changes both in human umbilical cord exposed to maternal type 1 diabetes and in neonatal rat lungs after streptozotocin-induced maternal hyperglycemia during pregnancy. Rat lungs were used as a model for the potential disease mechanisms. Gene expression alterations were determined in human umbilical cords at birth and in rat pup lungs at two weeks of age. During the first two postnatal weeks, rat lung development was studied morphologically and histologically. Further, the effect of postnatal hyperoxia on hyperglycemia-primed rat lungs was investigated at one week of age to mimic the clinical situation of supplemental oxygen treatment. Results: In the umbilical cord, maternal diabetes had a major negative effect on the expression of genes involved in blood vessel development; the genes regulating vascular tone were also affected. In neonatal rat lungs, intrauterine hyperglycemia had a prolonged effect on gene expression during late alveolarization, the most affected pathway being the upregulation of extracellular matrix proteins. Newborn rat lungs exposed to intrauterine hyperglycemia had thinner saccular walls without changes in airspace size, a smaller relative lung weight and lung total tissue area, and increased cellular apoptosis and proliferation compared to control lungs, possibly reflecting an aberrant maturational adaptation.
At one and two weeks of age, cell proliferation and secondary crest formation were accelerated in hyperglycemia-exposed lungs. Postnatal hyperoxic exposure alone caused arrested alveolarization, with thin-walled and enlarged alveoli. In contrast, the dual exposure of intrauterine hyperglycemia and postnatal hyperoxia resulted in a phenotype of thick septa together with arrested alveolarization and a decreased number of small pulmonary arteries. Conclusions: The maternal diabetic environment seems to alter the umbilical cord gene expression profile for the regulation of vascular development and function. Fetal hyperglycemia may additionally affect the genetic regulation of postnatal lung development and may induce prolonged structural alterations in neonatal lungs, together with a modifying effect on the deleterious pulmonary exposure to postnatal hyperoxia. This, combined with the novel human umbilical cord gene data, could serve as a stepping stone for future therapies to curb developmental aberrations.

Abstract:

A floristic and structural survey of a natural grassland community was conducted on Morro do Osso, a granitic hill in Porto Alegre, RS, Brazil. Structural data were surveyed in 39 one-square-meter plots placed over two major grassland areas. An accidental fire had occurred in one of the areas approximately one year prior to our survey, prompting further analysis of differences in parameters between the sites. The floristic list contains 282 species, whereas the structural survey found 161 species. The families with the highest accumulated importance values were Poaceae, Asteraceae and Fabaceae. The diversity and evenness indexes were 4.51 nats ind-1 and 0.86, respectively. Cluster analysis showed two groups coinciding with the areas distinguished by the fire disturbance. A similarity analysis between our data and two other data sets from nearby granitic hills resulted in 28% to 35% similarity, with an equivalent species-family distribution and many common dominant species, corroborating the concept of a continuous flora along the South Brazilian granitic hills.
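The reported diversity and evenness values are standard Shannon-Wiener statistics (H' in nats, with Pielou evenness J' = H'/ln S). A minimal sketch of their computation follows; the abundance values are illustrative, not the survey data.

```python
import math

def shannon_diversity(abundances):
    """Shannon diversity H' (nats) and Pielou evenness J' = H'/ln(S).

    abundances: per-species importance values (e.g. cover or counts).
    Species with zero abundance are ignored. The survey reports
    H' = 4.51 nats and J' = 0.86; the example data here are made up.
    """
    total = sum(abundances)
    props = [a / total for a in abundances if a > 0]
    h = -sum(p * math.log(p) for p in props)
    j = h / math.log(len(props)) if len(props) > 1 else 0.0
    return h, j

# Four equally abundant species: H' = ln(4) and perfect evenness J' = 1
h, j = shannon_diversity([10, 10, 10, 10])
```

Evenness below 1, as in the study's 0.86, reflects the dominance of a few families such as Poaceae and Asteraceae in the importance values.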

Abstract:

Human activity recognition in everyday environments is a critical but challenging task in Ambient Intelligence applications for achieving proper Ambient Assisted Living, and key challenges still remain to be dealt with to realize robust methods. One of the major limitations of today's Ambient Intelligence systems is the lack of semantic models of the activities in the environment, with which the system could recognize the specific activity being performed by the user(s) and act accordingly. In this context, this thesis addresses the general problem of knowledge representation in Smart Spaces. The main objective is to develop knowledge-based models, equipped with semantics, to learn, infer and monitor human behaviours in Smart Spaces. Moreover, some aspects of this problem have a high degree of uncertainty, and therefore the developed models must be equipped with mechanisms to manage this type of information. A fuzzy ontology and a semantic hybrid system are presented to allow modelling and recognition of a set of complex real-life scenarios where vagueness and uncertainty are inherent to the human nature of the users that perform them. The handling of uncertain, incomplete and vague data (i.e., missing sensor readings and activity execution variations, since human behaviour is non-deterministic) is approached for the first time through a fuzzy ontology validated in real-time settings within a hybrid data-driven and knowledge-based architecture. The semantics of activities, sub-activities and real-time object interaction are taken into consideration. The proposed framework consists of two main modules: the low-level sub-activity recognizer and the high-level activity recognizer. The first module detects sub-activities (i.e., actions or basic activities) and takes its input data directly from a depth sensor (Kinect).
The main contribution of this thesis tackles the second component of the hybrid system, which lays on top of the previous one, in a superior level of abstraction, and acquires the input data from the rst module's output, and executes ontological inference to provide users, activities and their in uence in the environment, with semantics. This component is thus knowledge-based, and a fuzzy ontology was designed to model the high-level activities. Since activity recognition requires context-awareness and the ability to discriminate among activities in di erent environments, the semantic framework allows for modelling common-sense knowledge in the form of a rule-based system that supports expressions close to natural language in the form of fuzzy linguistic labels. The framework advantages have been evaluated with a challenging and new public dataset, CAD-120, achieving an accuracy of 90.1% and 91.1% respectively for low and high-level activities. This entails an improvement over both, entirely data-driven approaches, and merely ontology-based approaches. As an added value, for the system to be su ciently simple and exible to be managed by non-expert users, and thus, facilitate the transfer of research to industry, a development framework composed by a programming toolbox, a hybrid crisp and fuzzy architecture, and graphical models to represent and con gure human behaviour in Smart Spaces, were developed in order to provide the framework with more usability in the nal application. As a result, human behaviour recognition can help assisting people with special needs such as in healthcare, independent elderly living, in remote rehabilitation monitoring, industrial process guideline control, and many other cases. This thesis shows use cases in these areas.
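The fuzzy linguistic labels mentioned above can be illustrated with a toy fuzzification step; the label names, the "interaction duration" variable and the thresholds below are invented for illustration and are not taken from the thesis's actual ontology:

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership: 0 at a, peak 1 at b, back to 0 at c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Hypothetical linguistic labels for "duration of object interaction" (seconds)
labels = {
    "short":  lambda t: triangular(t, 0, 2, 6),
    "medium": lambda t: triangular(t, 4, 10, 20),
    "long":   lambda t: triangular(t, 15, 40, 120),
}

def fuzzify(t):
    """Map a crisp sensor reading to degrees of membership in each label."""
    return {name: round(mu(t), 2) for name, mu in labels.items()}

print(fuzzify(5))
```

A rule in the knowledge base could then fire to a degree determined by these memberships (e.g. "IF interaction with cup is short AND ... THEN activity is drinking"), which is how vague, natural-language-like expressions enter the inference.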

Resumo:

Popular science has emphasized the risks of high sodium intake, and many studies have confirmed that salt intake is closely related to hypertension. The present mini-review summarizes experiments on salt taste sensitivity and its relationship with blood pressure (BP) and other variables of clinical and familial relevance. Children and adolescents from control parents (N = 72) or with at least one essential hypertensive (EHT) parent (N = 51) were investigated. Maternal questionnaires on eating habits and vomiting episodes were collected. Offspring anthropometric, BP, and salt taste sensitivity values were recorded and blood samples analyzed. Most mothers declared that they added "little salt" when cooking. Salt taste sensitivity was inversely correlated with systolic BP (SBP) in control youngsters (r = -0.33; P = 0.015). In the EHT group, SBP values were similar to those of controls, but the salt taste sensitivity threshold was lower. Obese offspring of EHT parents showed higher SBP and C-reactive protein values but no differences in renin-angiotensin-aldosterone system activity. Salt taste sensitivity was correlated with SBP only in the non-obese EHT group (N = 41; r = 0.37; P = 0.02). Salt taste sensitivity was also correlated with SBP in healthy, normotensive children and adolescents whose mothers reported significant vomiting during the first trimester (N = 18; r = -0.66; P < 0.005), but not in "non-vomiter" offspring (N = 54; r = -0.18; nonsignificant). There is evidence for a linkage between high blood pressure, salt intake and sensitivity, perinatal environment and obesity, with potential physiopathological implications in humans. This relationship has not been studied comprehensively using homogeneous methods, and therefore more research is needed in this field.
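The r values quoted above are Pearson correlation coefficients between paired measurements. A minimal sketch of that computation using only the standard library; the paired data below are synthetic and purely illustrative, not the study's measurements:

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two paired samples."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Synthetic pairs: salt taste threshold vs. systolic BP (illustrative only;
# a negative r means higher sensitivity thresholds go with lower SBP)
threshold = [2, 4, 6, 8, 10, 12]
sbp = [118, 115, 113, 110, 109, 105]
print(round(pearson_r(threshold, sbp), 2))
```

The P values reported alongside each r would come from a significance test of the coefficient against the null hypothesis of no correlation, given the sample size N.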

Resumo:

SUMMARY Organizational creativity – hegemonic and alternative discourses Over the course of recent developments in the societal and business environment, the concept of creativity has been brought into new arenas. The rise of ‘creative industries’ and the idea of creativity as a form of capital have attracted the interests of business and management professionals – as well as academics. As the notion of creativity has been adopted in the organization studies literature, the concept of organizational creativity has been introduced to refer to creativity that takes place in an organizational context. This doctoral thesis focuses on organizational creativity, and its purpose is to explore and problematize the hegemonic organizational creativity discourse and to provide alternative viewpoints for theorizing about creativity in organizations. Taking a discourse theory approach, this thesis, first, provides an outline of the currently predominant, i.e. hegemonic, discourse on organizational creativity, which is explored regarding themes, perspectives, methods and paradigms. Second, this thesis consists of five studies that act as illustrations of certain alternative viewpoints. Through these exemplary studies, this thesis sheds light on the limitations and taken-for-granted aspects of the hegemonic discourse and discusses what these alternative viewpoints could offer for the understanding of and theorizing for organizational creativity. This study leans on an assumption that the development of organizational creativity knowledge and the related discourse is not inevitable or progressive but rather contingent. The organizational creativity discourse has developed in a certain direction, meaning that some themes, perspectives, and methods, as well as assumptions, values, and objectives, have gained a hegemonic position over others, and are therefore often taken for granted and considered valid and relevant. 
The hegemonization of certain aspects, however, contributes to the marginalization of others. The thesis concludes that the hegemonic discourse on organizational creativity is based on extensive coverage of certain themes and perspectives, such as those focusing on individual cognitive processes, motivation, or organizational climate and their relation to creativity, to name a few. The limited focus on some themes and the confinement to certain prevalent perspectives, however, result in the marginalization of other themes and perspectives. The negative, often unintended, consequences, implications, and side effects of creativity, the factors that might hinder or prevent creativity, and a deeper inquiry into the ontology and epistemology of creativity have attracted relatively marginal interest. The material embeddedness of organizational creativity, in other words, the physical organizational environment as well as the human body and its non-cognitive resources, has largely been overlooked in the hegemonic discourse, although there are studies in this area that give reason to believe they might prove relevant for the understanding of creativity. The hegemonic discourse is based on an individual-centered understanding of creativity which overattributes creativity to an individual and his/her cognitive capabilities, while simultaneously neglecting how, for instance, the physical environment, artifacts, social dynamics and interactions condition organizational creativity. For historical reasons, quantitative as well as qualitative yet functionally oriented studies have dominated the organizational creativity discourse, although studies falling into the interpretationist paradigm have gradually become more popular. The two radical paradigms, as well as the methodological and analytical approaches typical of radical research, can be considered to hold a marginal position in the field of organizational creativity.
The hegemonic organizational creativity discourse has provided extensive findings related to many aspects of organizational creativity, although the conceptualizations and understandings of organizational creativity in the hegemonic discourse are also in many respects limited and one-sided. The hegemonic discourse is based on an assumption that creativity is desirable, good, necessary, or even obligatory, and should be encouraged and nourished. The conceptualizations of creativity favor the kind of creativity which is useful, valuable and can be harnessed for productivity. The current conceptualization is limited to the type of creativity that is acceptable and fits the managerial ideology, and washes out any risky, seemingly useless, or negative aspects of creativity. It also limits the possible meanings and representations that 'creativity' has in the respective discourse, excluding many meanings of creativity encountered in other discourses. The excessive focus on creativity that is good, positive, productive and fits the managerial agenda, while ignoring other forms and aspects of creativity, contributes to the dilution of the notion. Practices aimed at encouraging this kind of creativity may actually entail a risk of fostering moderate alterations rather than more radical novelty, as well as management and organizational practices which limit creative endeavors rather than increase their likelihood. The thesis concludes that, although not often given the space and attention they deserve, there are alternative conceptualizations and understandings of organizational creativity which embrace a broader notion of creativity. The inability to accommodate the 'other' understandings and viewpoints within the organizational creativity discourse runs a risk of misrepresenting the complex and many-sided phenomenon of creativity in an organizational context.

Keywords: organizational creativity, creativity, organization studies, discourse theory, hegemony

Resumo:

There are more than 7000 languages in the world, and many of these have emerged through linguistic divergence. While questions related to the drivers of linguistic diversity have been studied before, including studies with quantitative methods, there is no consensus as to which factors drive linguistic divergence, and how. In the thesis, I have studied linguistic divergence with a multidisciplinary approach, applying the framework and quantitative methods of evolutionary biology to language data. With quantitative methods, large datasets may be analyzed objectively, while approaches from evolutionary biology make it possible to revisit old questions (related to, for example, the shape of the phylogeny) with new methods, and adopt novel perspectives to pose novel questions. My chief focus was on the effects exerted on the speakers of a language by environmental and cultural factors. My approach was thus an ecological one, in the sense that I was interested in how the local environment affects humans and whether this human-environment connection plays a possible role in the divergence process. I studied this question in relation to the Uralic language family and to the dialects of Finnish, thus covering two different levels of divergence. However, as the Uralic languages have not previously been studied using quantitative phylogenetic methods, nor have population genetic methods been previously applied to any dialect data, I first evaluated the applicability of these biological methods to language data. I found the biological methodology to be applicable to language data, as my results were rather similar to traditional views as to both the shape of the Uralic phylogeny and the division of Finnish dialects. I also found environmental conditions, or changes in them, to be plausible inducers of linguistic divergence: whether in the first steps in the divergence process, i.e. dialect divergence, or on a large scale with the entire language family. 
My findings concerning Finnish dialects led me to conclude that the functional connection between linguistic divergence and environmental conditions may arise through human cultural adaptation to varying environmental conditions. This is also one possible explanation on the scale of the Uralic language family as a whole. The results of the thesis bring insights on several different issues in both a local and a global context. First, they shed light on the emergence of the Finnish dialects. If the approach used in the thesis is applied to the dialects of other languages, broader generalizations may be drawn as to the inducers of linguistic divergence. This again brings us closer to understanding the global patterns of linguistic diversity. Secondly, the quantitative phylogeny of the Uralic languages, with estimated times of language divergences, yields another hypothesis as to the shape and age of the language family tree. In addition, the Uralic languages can now be added to the growing list of language families studied with quantitative methods. This will allow broader inferences as to global patterns of language evolution, and more language families can be included in constructing the tree of the world’s languages. Studying history through language, however, is only one way to illuminate the human past. Therefore, thirdly, the findings of the thesis, when combined with studies of other language families, and those for example in genetics and archaeology, bring us again closer to an understanding of human history.
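The quantitative phylogenetic methods described above start from a character matrix and pairwise distances between languages. A minimal sketch of that first step, using an invented binary trait matrix (the language set is illustrative and the trait values are made up, not real Uralic data):

```python
from itertools import combinations

# Hypothetical presence/absence trait matrix (rows: languages, cols: traits)
traits = {
    "Finnish":   [1, 1, 0, 1, 0, 1],
    "Estonian":  [1, 1, 0, 1, 1, 1],
    "Hungarian": [0, 1, 1, 0, 1, 0],
    "Sami":      [1, 0, 0, 1, 0, 1],
}

def hamming(a, b):
    """Proportion of traits on which two languages differ."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

# Pairwise distance matrix, the input to tree-building methods
distances = {
    (l1, l2): hamming(traits[l1], traits[l2])
    for l1, l2 in combinations(traits, 2)
}
for pair, d in sorted(distances.items(), key=lambda kv: kv[1]):
    print(pair, round(d, 2))
```

A clustering or Bayesian tree-inference method (e.g. neighbour joining, or the phylogenetic software used in evolutionary biology) would then turn such a distance or character matrix into a tree hypothesis of the kind the thesis compares against traditional classifications.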

Resumo:

Always evolving, maritime law is constantly modernizing. From its beginnings, maritime law has tried to adapt to the realities of its time. Change was slow and difficult to achieve. Initially, practice allowed a maritime carrier to escape almost all liability. The application of industry customs and of contract law had given way to abuses and to inequalities of power between maritime carriers and shippers/cargo owners. The arrival of the twentieth century changed everything. The adoption of the Hague, Hague/Visby and Hamburg Rules transformed the system of carriage of goods by sea as it had been known until then. The maritime industry thus evolved gradually, while maritime law developed considerably alongside it, with more active judicial participation. Nowadays, maritime carriers bear greater liability, yet this does not mean they are always able to deliver their cargo in good condition. Every time a ship leaves port, both it and its cargo are in danger. As a result, goods are lost or damaged en route while under the carrier's responsibility. Despite the changes and evolution in marine operations and in the administration of the field, the reality remains that carriage of goods by sea is not one hundred percent guaranteed. In earlier times, a maritime carrier faced all sorts of perils during its voyage. Consequently, cargo was exposed to losses and dangers en route. Every year a great number of ships are lost at sea, and with them the cargo they carry. All the modernization in the world cannot eliminate the high risks to which carriers and their goods are exposed.
Toward the end of the 1970s, with the arrival of the Hamburg convention, the number of ships lost at sea was still growing. Thus even in modern times the problems of the past were not escaped. "On average, every day a ship of more than 100 tons is lost with all hands (that is: ship and cargo), and the figure is growing: 473 in 1978. To these major casualties must be added the many averages due to bad weather and losses from multiple causes (insufficient marking, destination errors...). These perils explain: (1) the carriers' liability system; (2) the limitation of liability of shipowners; ..." The legal history of the shipowners' liability and compensation system demonstrates the difficulty courts have encountered in trying to reach consensus and uniformity in dealing with these notions. To better understand the different facets of maritime commerce, one must understand the role of shipowners in this field. Shipowners are the means by which the carriage of goods by sea is possible; their role is of central importance. Consequently, maritime law faces complex questions of liability and compensation, in particular the validity of carriers' insertion of exoneration clauses to free themselves from part or all of their liability. Over the years this practice reached such a point of injustice and flagrant abuse that it was no longer possible to ignore the problem. The industry, in crisis, found itself obliged to confront these questions and promote change. At common law, the shipowner could modify his prima facie obligation as much as he wished. Over the years, these exception clauses grew in number and complexity to the point that it became difficult to perceive what rights one might have against the carrier.
Cargo owners, exporters and importers of goods (i.e. shippers), carriers, jurists and authors agree that a solution must be found regarding the exoneration clauses inserted into contracts of carriage under bills of lading, more precisely those clauses that heavily favour shipowners over shippers. Moreover, the notion of burden of proof had long been obscure. It was essential for shipper countries to reach a solution on this question, noting that in practice a very heavy burden was imposed on them. Their desire was to find a just and equitable solution for all parties concerned, not one favouring the interests of only one side. Since carriage by sea is largely international, it was clear that a viable solution could not be left in the hands of a single country. The ideal solution had to include all parties concerned. Despite the desire to find a global solution, general consensus was long in coming. The urgent need for uniformity among countries gave rise to several attempts at the private, national and international levels. Over the years, a great number of conferences were held on questions of maritime carriers' liability and compensation. No success was achieved in the pursuit of uniformity. Consequently, in 1893 the United States took the situation into its own hands to settle the problem and adopted a national statute. Thus: "The reactions came from the United States, a country of shippers which resented a system that disadvantaged them to the profit of the traditional shipowning nations: English, Norwegian, Greek... The Harter Act of 1893 established a transactional, but mandatory, system..." In the United States the question of exoneration clauses was thus finally regulated, and consequently their application largely limited.
Since the Harter Act did not apply internationally, its success had limits. At the international level the situation remained the same, and the need to find a solution acceptable to all persisted. At the beginning of the twentieth century, the use of contracts of carriage under bills of lading for the carriage of goods by sea was common practice. At the heart of the problem were the bills of lading into which shipowners inserted all sorts of controversial exoneration clauses. It became evident that a solution to the problem of abusive exoneration clauses would turn on regulating the use of bills of lading. Thus, any compromise envisaged would necessarily have to govern shipowners' practice in their use of bills of lading. The years before and after the First World War were marked by the growing and unjust use of bills of lading. The need to standardize practice then became pressing, and shipper countries grew impatient, demanding the adoption of legislation similar to the United States' Harter Act. One thing was certain: all the interests at stake aspired to the same objective, to achieve acceptance, certainty and unanimity in current and legal practices. The Hague Rules were the long-sought solution. They represented a new regime to govern the obligations and liabilities of carriers. Their aim was to promote a well-balanced system between the parties involved; they further sought to share responsibility equitably between carriers and shippers for any loss or damage caused to the goods carried. Consequently, the applicability of the Hague Rules was limited to contracts of carriage under bills of lading.
With time, the Rules were recognized as international in character, and their central place on the global stage as the basis of relations between shippers and carriers was accepted. At first, the reception of the new regime was not warm. The Hague Convention of 1924 met massive opposition from maritime carriers, who refused the imposition of a compromise affecting the use of exoneration clauses. Finally, the need for uniformity at the international level stimulated its widespread adoption. The Hague Rules were, for their time, a true innovation, a catalyst for future reforms and a model of global success. For the first time in the history of maritime law, an international convention would govern and limit the abusive practices of maritime carriers. The Rules leave no room for uncertainty: they stipulate clearly that exoneration clauses contrary to the Hague Rules shall be null and void. Moreover, the Rules state unequivocally the rights, obligations and liabilities of carriers. Nevertheless, maritime commerce, following its course, was marked by the modernism of its time. Current practice demanded reforms to adapt to the changes in the industry, thus ending the period of harmonization. The Hague Rules in their original form no longer met the needs of the maritime industry. Consequently, at the end of the 1960s the Visby Rules were adopted. Despite their success, the Rules could not escape the many criticisms expressing the view that they rather favoured carriers' interests to the detriment of shippers. Responding to mounting pressure, the Hague Rules were amended, and on 23 February 1968 they were modified by the Visby Protocol. Though an attempt to address the dissatisfaction of the shipper countries, the adoption of the Visby Rules was far from a success.
Their adoption did not replace the Hague regime but simply put in place a supplement to fill the gaps in the existing system. Since the changes found in Visby were not of great scope, the reform was criticized by all, giving rise to new debates and finally to a new convention. Visby being a failure, the answer arrived in 1978 with the establishment of a new regime, different from its predecessor (Hague/Hague-Visby). The Hamburg Rules were the result of much effort at the international level. Under growing pressure from shipper countries, and particularly from developing countries, the arrival of a new regime was inevitable. The proper functioning of the industry and the satisfaction of all interested parties required a compromise responding to the interests of all. With the help of the United Nations and the participation of all parties concerned, the Hamburg Rules were adopted. Accepting this new regime meant the beginning of a new system and the end of an era centred on the Hague Rules. There is no doubt that the new rules cut ties with the past and change the liability system governing maritime carriers. Article 4(2) of the Hague Rules and its list of exceptions is eliminated. Half a century of practice is set aside; the page is turned on the experiences of the past, toward a new future. It is clear that the two systems governing maritime law aim at the same goal: international conformity. This thesis deals with the notions of liability, obligation and compensation of maritime carriers under the Hague and Hamburg Rules, in particular the difficulties surrounding questions of exoneration and compensation. Each regime has a distinct approach to resolving the questions and concerns of the field.
On the one hand, the thesis will demonstrate the different facets of each system; it will then emphasize the weak and strong points of each regime. Every country faces the dilemma of deciding which regime should govern its maritime transport. The fundamental question is how to break the ties of the past and leave the Hague Rules in their place, as predecessor and model for the new system. It is certain that a great number of countries do not wish to part with the Hague Rules and continue to apply them. Many authors express their disagreement, indicating that it would be regrettable to turn one's back on so many years of work: to abandon the Hague Rules would be a mistake as well as a waste of time and money. For more than 50 years, courts throughout the world succeeded in establishing a certain certainty and harmonization at the legal level; to change everything now does not seem logical. All the same, the evident cannot be ignored: the Hague Rules no longer meet the needs of the modern maritime field. Questions of liability, immunity, burden of proof and jurisdictional conflict remain unclear. International legislation requires reforms that keep pace with the changes marking the evolution of the field. The advocates of change describe the Hague Rules as archaic, unjust and out of step with progress. They are known as the product of the industrialized countries, adopted without the agreement or participation of the shipper or developing countries. Thus the adoption of the Hamburg Rules signifies the replacement of the preceding system, not its reform. Article 5(1) of the new system describes a liability regime based on presumed fault without recourse to a list of exonerations; moreover, the new rules extend the carrier's period of responsibility.
The Hamburg Rules may not be the ideal solution, but for the first time they represent the interests of all parties concerned and, better still, a compromise accepted by all. That said, it is true that the near future remains uncertain. It is clear that most countries are in no hurry to join this new regime, however admirable it may be. The debate remains open; the verdict is still being deliberated. One thing remains certain: the detailed analysis of the functioning of the Hamburg Rules, with their flaws and merits, is far from complete. Only with hindsight can one proclaim the success or failure of a new system. Consequently, the limited number of parties adhering to it makes the analysis difficult and only theoretical. Nevertheless, there is hope that with time the sought-after objective will be attained, and that maritime commerce governed by uniform rules and customs throughout the globe will become common practice. In the meantime, the reality of the field exposes us to a world divided and governed by two systems.