889 results for Indians in literature
Abstract:
Objectives: To evaluate the prevalence of dental agenesis and its possible association with other developmental dental anomalies and systemic entities. Setting and Sample Population: Descriptive cross-sectional study, for which 1518 clinical records of patients seen at the Odontological Service of the Primary Health Centre of Cassà de la Selva (Girona, Spain) between December 2002 and February 2006 were reviewed. Data were recorded on oral and dental anomalies and on the associated systemic entities, among those reported as concomitant in the literature. Results: Values of 9.48% (7.25% excluding the third molars) for dental agenesis and 0.39% for oligodontia were obtained. Dental agenesis was observed concomitantly with some other forms of oral and dental anomalies. Attention must be drawn to the fact that a greater number of concomitant systemic entities was observed in those patients who presented a severe phenotypic pattern of dental agenesis. Conclusions: The results of the present study do not differ from those reported in studies of similar characteristics in Western and Spanish populations. The relationship observed between certain systemic entities and developmental dental anomalies suggests a possible common genetic etiology.
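As a rough illustration of how prevalence figures like these are computed and bounded, here is a short sketch using a Wilson score interval; the case count of 144 is back-calculated from the reported 9.48% of 1518 records and is therefore approximate:

```python
import math

def prevalence_ci(cases, n, z=1.96):
    """Point prevalence with an approximate 95% Wilson score confidence interval."""
    p = cases / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return p, centre - half, centre + half

# 9.48% of 1518 records corresponds to roughly 144 patients with agenesis
p, lo, hi = prevalence_ci(144, 1518)
print(f"prevalence = {p:.2%}, 95% CI [{lo:.2%}, {hi:.2%}]")
```

The interval (roughly 8% to 11% here) is what makes the stated agreement with other Western and Spanish samples plausible to check.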
Abstract:
Ischemic stroke is the third leading cause of death in developed countries and the disease responsible for the most serious long-term neurological disability. Understanding the molecular and anatomical basis of stroke recovery is therefore extremely important and represents a major field of interest for basic and clinical research. Over the past two decades, much attention has focused on counteracting the noxious effects of the ischemic insult with exogenous substances (oxygen radical scavengers, AMPA and NMDA receptor antagonists, MMP inhibitors, etc.), which were successfully tested in the experimental field but turned out to have controversial effects in clinical trials. A different but complementary approach to addressing ischemia pathophysiology and treatment options is to stimulate and investigate intrinsic mechanisms of neuroprotection using the "preconditioning effect": applying a brief insult protects against subsequent prolonged and detrimental ischemic episodes by up-regulating powerful endogenous pathways that increase resistance to injury. We believe that this approach might offer important insight into the molecular mechanisms responsible for endogenous neuroprotection. 
In addition, results from preconditioning experiments may provide new strategies for making brain cells "naturally" more resistant to ischemic injury and accelerating their rate of functional recovery. In the first part of this work, we investigated the downstream mechanisms of neuroprotection induced by thrombin, a well-known neuroprotectant which has been demonstrated to reduce stroke-induced cell death in in vitro and in vivo experimental models. Using microsurgery to induce transient brain ischemia in mice, we showed, through a molecular biology approach and an in vivo analysis of a specific kinase inhibitor (L JNK1), that thrombin can stimulate both the MAPK and JNK intracellular pathways. We also studied thrombin's impact on functional recovery, demonstrating that these molecular mechanisms could enhance post-stroke motor outcome. The second part of this study is based on investigating the anatomical basis underlying the connectivity remodeling that leads to functional improvement after stroke. To do this, we used both a mouse model of experimental ischemia and human subjects with stroke. It is known from previously published data that the brain adapts to damage in a way that attempts to preserve motor function. The result of this reorganization is a new functional and structural architecture, which will vary from patient to patient depending on the anatomy of the damage, the biological age of the patient, and the chronicity of the lesion. The success of any given therapeutic intervention will depend on how well it interacts with this new architecture. For this reason, we applied diffusion magnetic resonance techniques able to detect micro-structural and connectivity changes following an ischemic lesion: diffusion tensor MRI (DT-MRI) and diffusion spectrum MRI (DS-MRI). Using DT-MRI, we performed a long-term follow-up study of stroke mice which showed that diffusion changes in the stroke region and fiber-tract remodeling correlate with stroke recovery. 
In addition, axonal reorganization was shown in areas of increased expression of a plasticity-related protein expressed in the axonal growth cone (GAP-43). Applying the same technique, we then performed a retrospective and a prospective study in humans demonstrating how specific DTI parameters can help monitor the speed of recovery and reveal longitudinal changes in damaged tracts involved in clinical symptoms. Finally, in the last part of this study we showed how DS-MRI can be applied to both experimental and human stroke and which perspectives it opens for further investigating post-stroke plasticity.
Abstract:
Software development is a complex process. One of its central factors is the set of requirements placed on the software. These requirements come in many forms and at many levels, from desired functionality to very detailed specifications. Managing these requirements is likewise highly multifaceted, even though the literature presents it as a clear process consisting of a series of distinct phases. The focus of this work was on managing requirement changes and feedback on the finished software, and on how a requirements management tool could support these processes. Using a requirements management tool does not solve any problems by itself, but it provides a framework for improving the management of requirements. Benefits of using such a tool include centralized storage of requirements, definition of access rights governing which users may view or modify information, control of the change management process, change impact analysis and traceability, and access to the data through a web browser.
Abstract:
The objective of this work was to map the processes of the service organization of Larox Oy, Lappeenranta, and the interfaces between those processes. Processes, process development, and process innovation were first examined on the basis of the literature. A simple methodology based on the mind-mapping technique was developed for presenting the interfaces clearly. The current state of the processes and their interfaces was analyzed and documented by interviewing Larox Oy employees and customers and by studying process descriptions and other relevant documents. Based on the results of the analysis, the main problem areas in the interfaces were identified and possible solutions to them were considered. Small process development initiatives were developed in cooperation with Larox Oy employees. The work concludes with a discussion of possible future operating models for the Larox Oy service organization.
Abstract:
The research problem was how knowledge management can support the product development process. What are the key factors, both in the knowledge environment and in the knowledge itself, that particularly affect value creation in the product development process and the development of its processes? The study is a qualitative case study. The research problems were first examined with the help of the literature, after which a theoretical framework was built to study the delimited problem area in the case company. The material of the empirical study consists mainly of data from personal theme interviews. The findings on the most significant obstacles to knowledge utilization, as well as the improvement proposals, are categorized according to the factors presented in the theoretical framework. The answers obtained in the interviews support the view of the most important contributing factors derived from the literature and from a practitioner in the field. The most important measures and initiatives for improving knowledge creation concerned above all the external conditions of work, rather than the knowledge creation process itself. The most significant obstacles were problems related to culture, physical and mental space, and human resources. Solutions to the problems were expected mainly from information technology, human resources, and the shaping of the knowledge itself. Classifying and interpreting the core knowledge flows and knowledge assets of the product development process with the help of the Learning Spiral, which describes knowledge creation, gave mainly theoretical indications of the means available for increasing and sharing knowledge by knowledge type. Based on the results, the case company should pay particular attention to documenting and sharing knowledge, especially knowledge that is held by only a few people in the organization and/or is highly tacit in nature.
Abstract:
The objective of the thesis was to explore the nature and characteristics of customer-related internal communication in a global industrial matrix organization during a specific customer relationship, and how it could be improved. The theoretical part of the study reviews the concepts of intra-organizational information and knowledge sharing. It also reviews the influence of internal communication on customer relationships, its problems, and the suggestions made in the literature for improving internal communication. The empirical part of the study was conducted using content analysis and social network analysis as research methods. The data were collected through interviews and a questionnaire. Internal communication was observed first at a general level within the organization, from the point of view of a certain business, and secondly during a specific customer relationship, at the personal and departmental levels. The results of the study describe the nature and characteristics of internal communication in the organization and give 13 suggestions for improving it. Although the study was done in one specific organization, it also offers insights for other organizations and managers seeking to improve their internal communication.
Abstract:
This thesis gathers knowledge about ongoing high-temperature reactor projects around the world. Methods for calculating coolant flow and heat transfer inside a pebble-bed reactor core are also developed. The thesis begins with an introduction to high-temperature reactors, including the current state of the technology. Process heat applications that could use the heat from a high-temperature reactor are also introduced. A suitable reactor design, with data available in the literature, is selected for the calculation part of the thesis. The commercial computational fluid dynamics software Fluent is used for the calculations. The pebble bed is approximated as a packed bed, which introduces sink terms into the momentum equations of the gas flowing through it. A position-dependent value is used for the packing fraction. Two different models are used to calculate heat transfer. First, a local thermal equilibrium is assumed between the gas and solid phases and a single energy equation is used. In the second approach, separate energy equations are used for the two phases. Information about steady-state flow behavior, pressure loss, and temperature distribution in the core is obtained as a result of the calculations. The effect of the inlet mass flow rate on pressure loss is also investigated. The results correspond quite well to data found in the literature, considering the number of simplifications in the calculations. The models developed in this thesis can be used to solve coolant flow and heat transfer in a pebble-bed reactor, although additional development and model validation are needed for better accuracy and reliability.
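Packed-bed momentum sink terms of the kind described above are commonly parameterized with the Ergun equation. The sketch below is a simplified stand-in for such a model, not the thesis's actual Fluent setup; the pebble size, porosity, and helium-like property values are illustrative assumptions:

```python
def ergun_pressure_gradient(u, eps, d_p, rho, mu):
    """Pressure gradient (Pa/m) across a packed bed via the Ergun equation.

    u    : superficial (empty-tube) gas velocity, m/s
    eps  : bed porosity (void fraction)
    d_p  : pebble diameter, m
    rho  : gas density, kg/m^3
    mu   : gas dynamic viscosity, Pa*s
    """
    viscous = 150 * mu * (1 - eps) ** 2 / (eps ** 3 * d_p ** 2) * u
    inertial = 1.75 * rho * (1 - eps) / (eps ** 3 * d_p) * u ** 2
    # The sum acts as the momentum sink per unit length of bed
    return viscous + inertial

# Illustrative values: 6 cm pebbles, porosity 0.39, helium at high pressure
dp_dx = ergun_pressure_gradient(u=1.0, eps=0.39, d_p=0.06, rho=4.3, mu=3.8e-5)
print(f"pressure gradient ~ {dp_dx:.0f} Pa/m")
```

The strong porosity dependence (the 1/eps^3 factor) is why a position-dependent packing fraction, as used in the thesis, matters near the core walls.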
Abstract:
This paper is a literature review describing the state of the art of permanent magnet generator and motor construction and discussing current and possible applications of these machines in industry. Permanent magnet machines are a well-known class of rotating and linear electric machines that has been used for many years in industrial applications. Particular interest in permanent magnet generators is connected with wind turbines, which are becoming increasingly popular. Geared and direct-driven permanent magnet generators are described, and a classification of direct-driven permanent magnet generators is given. Design aspects of permanent magnet generators are presented, and designs of permanent magnet generators for wind turbines are highlighted. Dynamics and vibration problems of permanent magnet generators covered in the literature are presented. The application of the Finite Element Method to the solution of mechanical problems in the field of permanent magnet generators is discussed.
Abstract:
Nanoparticles offer an adjustable and expandable reactive surface area compared to the more traditional solid-phase forms utilized in bioaffinity assays, due to their high surface-to-volume ratio. The versatility of nanoparticles is further improved by the ability to incorporate various molecular complexes, such as luminophores, into the core. Nanoparticle labels composed of polystyrene, silica, or inorganic crystals doped with a high number of luminophores, preferably lanthanide(III) complexes, are employed in bioaffinity assays. Other label species, such as semiconductor crystals (quantum dots) or colloidal gold clusters, are also utilized. The surface derivatization of such particles with biomolecules is crucial for their applicability to bioaffinity assays. The effectiveness of a coating depends on the biomolecule and particle surface characteristics and on the selected coupling technique. The most critical aspects of particle labels in bioaffinity assays are their size-dependent features. For polystyrene, silica, and inorganic phosphor particles, these include the kinetics, specific activity, and colloidal stability. For quantum dots and gold colloids, the spectral properties also depend on particle size. This study reports the utilization of europium(III)-chelate-embedded nanoparticle labels in the development of bioaffinity assays. The experimental part covers both heterogeneous and homogeneous assay formats, elucidating the wide applicability of the nanoparticles. It was revealed that the employment of europium(III) nanoparticles in heterogeneous assays for viral antigens, adenovirus hexon and hepatitis B surface antigen (HBsAg), resulted in a sensitivity improvement of 10-1000-fold compared to the reference methods. This improvement was attributed to the extreme specific activity and enhanced monovalent affinity of the nanoparticle conjugates. 
The applicability of europium(III)-chelate-doped nanoparticles to homogeneous assay formats was demonstrated in two completely different experimental settings: assays based on immunological recognition and assays based on proteolytic activity. It was shown that, in addition to small-molecule acceptors, particulate acceptors may also be employed, because the high specific activity of the particles promotes proximity-induced reabsorptive energy transfer in addition to non-radiative energy transfer. The principle of the proteolytic activity assay relied on a novel dual-step FRET concept, wherein streptavidin-derivatized europium(III)-chelate-doped nanoparticles were used as donors for peptide substrates modified with biotin, a primary acceptor compatible with the terminal europium emission, and a secondary quencher acceptor. The recorded sensitized emission was proportional to the enzyme activity, and the assay response to various inhibitor doses was in agreement with values found in the literature, showing the feasibility of the technique. Experiments on the impact of donor particle size on the extent of direct donor fluorescence and reabsorptive excitation interference in a FRET-based application were conducted with differently sized europium(III)-chelate-doped nanoparticles. It was shown that the size effect was minimal.
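The distance dependence at the heart of the non-radiative (FRET) transfer discussed above follows the Förster relation E = 1 / (1 + (r/R0)^6). A minimal sketch, with an illustrative Förster radius rather than a value from this study:

```python
def fret_efficiency(r, r0):
    """Forster resonance energy transfer efficiency at donor-acceptor distance r.

    r and r0 share the same units (typically nm); r0 is the Forster radius,
    the distance at which the transfer efficiency is exactly 50%.
    """
    return 1.0 / (1.0 + (r / r0) ** 6)

# Illustrative Forster radius of 6 nm (an assumption, not from the study)
for r in (3, 6, 9, 12):
    print(f"r = {r:2d} nm -> E = {fret_efficiency(r, 6):.3f}")
```

The sixth-power falloff is what makes the transfer "proximity-induced": efficiency collapses within a few nanometres beyond R0, while reabsorptive transfer has no such sharp range limit.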
Abstract:
The aim of this work is to study the existing analytical calculation procedures found in the literature for calculating the eddy-current losses in surface-mounted permanent magnets in PMSM applications. The most promising algorithms are implemented in MATLAB using the dimensional data of the LUT prototype machine. In addition, finite element analysis, carried out with the Flux 2D software from Cedrat Ltd, is applied to calculate the eddy-current losses in the permanent magnets. The results obtained from the analytical methods are compared with the numerical results.
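In their simplest resistance-limited form, analytical eddy-current loss estimates of this kind reduce to the classical thin-plate formula. The sketch below illustrates the idea only; the ripple amplitude, frequency, segment thickness, and NdFeB conductivity are assumed values, not data from the LUT prototype:

```python
import math

def eddy_loss_density(b_peak, freq, thickness, sigma):
    """Resistance-limited eddy-current loss per unit volume (W/m^3)
    in a thin conducting plate under a sinusoidal flux-density ripple.

    b_peak    : peak flux-density ripple seen by the magnet, T
    freq      : ripple frequency, Hz
    thickness : plate (magnet segment) thickness, m
    sigma     : electrical conductivity, S/m
    """
    return (math.pi * freq * thickness * b_peak) ** 2 * sigma / 6.0

# Illustrative values: 20 mT slot-harmonic ripple at 2 kHz in a 5 mm
# NdFeB segment (sigma ~ 6.7e5 S/m)
p = eddy_loss_density(b_peak=0.02, freq=2000, thickness=0.005, sigma=6.7e5)
print(f"loss density ~ {p:.0f} W/m^3")
```

The quadratic dependence on thickness is the reason surface magnets are often segmented: halving the segment width quarters this loss component.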
Abstract:
Validation and verification operations encounter various challenges in the product development process. Requirements for an increasing development cycle pace place new demands on the component development process. Verification and validation usually represent the largest activities, consuming up to 40-50% of R&D resources. This research studies validation and verification as part of the case company's component development process. The target is to define a framework that can be used to improve the evaluation and development of validation and verification capability in display module development projects. The definition and background of validation and verification are studied in this research. Additionally, theories of project management, systems, organisational learning, and causality are studied. The framework and key findings of this research are presented, and a feedback system based on the framework is defined and implemented in the case company. The research is divided into a theory part and an empirical part: the theory part is conducted as a literature review, and the empirical part as a case study, using constructive and design research methods. A framework for capability evaluation and development was defined and developed as the result of this research. A key finding of this study was that a double-loop learning approach combined with the validation and verification V+ model enables the definition of a feedback reporting solution. As an additional result, some minor changes to the validation and verification process were proposed. A few concerns are expressed about the validity and reliability of this study, the most important one being the selected research method and the selected model itself: the final state can be normative, and the researcher may set study results before the actual study and describe expectations for the study in its initial state. Finally, the reliability and validity of this work are examined.
Abstract:
The size and complexity of software development projects are growing very fast. At the same time, the proportion of successful projects is still quite low according to previous research. Although almost every project team knows the main areas of responsibility which would help finish a project on time and on budget, this knowledge is rarely used in practice. It is therefore important to evaluate the success of existing software development projects and to suggest a method for evaluating the chances of success that can be used in software development projects. The main aim of this study is to evaluate the success of projects in the selected geographical region (Russia-Ukraine-Belarus). The second aim is to compare existing models of success prediction and to determine their strengths and weaknesses. The research was done as an empirical study. A survey with structured forms and theme-based interviews were used as the data collection methods. The information gathering was done in two stages. In the first stage, the project manager or someone with similar responsibilities answered the questions over the Internet. In the second stage, the participant was interviewed, and his or her answers were discussed and refined. This made it possible to get accurate information about each project and to avoid errors. It was found that there are many problems in software development projects. These problems are widely known and have been discussed in the literature many times. The research showed that most of the projects have problems with schedule, requirements, architecture, quality, and budget. A comparison of two models of success prediction showed that The Standish Group model overestimates problems in projects, whereas McConnell's model can help to identify problems in time and avoid trouble later. A framework for evaluating the chances of success in distributed projects was suggested. The framework is similar to The Standish Group model but was customized for distributed projects.
Abstract:
Specific combustion programs (Gaseq: Chemical equilibria in perfect gases, by Chris Morley) are used to model dioxin and furan formation in the incineration of urban solid wastes. With these programs, it is possible to establish correlations with the formation mechanisms postulated in the literature on the subject. It was found that minimum oxygen quantities are required to obtain a significant formation of these compounds and that more furans than dioxins are formed. Likewise, dioxin and furan formation is related to the presence of carbon monoxide, and the distribution of dioxins and furans among their different compounds depends on the relative composition of chlorine and hydrogen. This is due to the fact that an increased chlorine availability leads to the formation of compounds bearing a higher chlorine concentration (penta-, hexa-, hepta-, and octachlorides), whereas an increased hydrogen availability leads to the formation of compounds bearing a lower chlorine number (mono-, di-, tri-, and tetrachlorides).
Abstract:
Currently, numerous high-throughput technologies are available for the study of human carcinomas, and many variations of these techniques have been described in the literature. The common denominator of these methodologies is the large amount of data obtained in a single experiment, in a short time period, and at a fairly low cost. However, several problems and limitations of these methods have also been described. The purpose of this study was to test the applicability of two selected high-throughput methods, cDNA and tissue microarrays (TMA), in cancer research. Two common human malignancies, breast and colorectal cancer, were used as examples. This thesis aims to present some practical considerations that need to be addressed when applying these techniques. cDNA microarrays were applied to screen aberrant gene expression in breast and colon cancers. Immunohistochemistry was used to validate the results and to evaluate the association of selected novel tumour markers with patient outcome. The type of histological material used in immunohistochemistry was evaluated, especially considering the applicability of whole tissue sections and different types of TMAs. Special attention was paid to the methodological details of the cDNA microarray and TMA experiments. In conclusion, many potential tumour markers were identified in the cDNA microarray analyses. Immunohistochemistry could be applied to validate the observed gene expression changes of selected markers and to associate their expression changes with patient outcome. In the current experiments, both TMAs and whole tissue sections could be used for this purpose. This study showed for the first time that securin and p120 catenin protein expression predict breast cancer outcome and that carbonic anhydrase IX immunopositivity is associated with the outcome of rectal cancer. 
The predictive value of these proteins was statistically evident also in multivariate analyses, with up to a 13.1-fold risk of cancer-specific death in a specific subgroup of patients.
Abstract:
Preference relations, and their modeling, have played a crucial role in both the social sciences and applied mathematics. A special category of preference relations is represented by cardinal preference relations, which are relations that also take into account the degree of preference. Preference relations play a pivotal role in most multi-criteria decision-making methods and in operations research. This thesis aims to present some recent advances in their methodology. There are a number of open issues in this field, and the contributions presented in this thesis can be grouped accordingly. The first issue regards the estimation of a weight vector given a preference relation. A new and efficient algorithm for estimating the priority vector of a reciprocal relation, i.e. a special type of preference relation, is presented. The same section contains the proof that twenty methods already proposed in the literature lead to unsatisfactory results because they employ a conflicting constraint in their optimization model. The second area of interest concerns consistency evaluation, and it is possibly the kernel of the thesis. This thesis contains proofs that some indices are equivalent and that, therefore, some seemingly different formulae end up leading to the very same result. Moreover, some numerical simulations are presented. The section ends with some considerations on a new method for fairly evaluating consistency. The third matter regards incomplete relations and how to estimate missing comparisons; this section reports a numerical study of the methods already proposed in the literature and analyzes their behavior in different situations. The fourth and last topic proposes a way to deal with group decision making by connecting preference relations with social network analysis.
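As a concrete illustration of the weight-estimation and consistency-evaluation problems described above, here is a sketch of the classical eigenvector method and Saaty's consistency index for a multiplicative pairwise comparison matrix. This is one standard method from the literature, not the new algorithm proposed in the thesis:

```python
def priority_vector(a, iters=100):
    """Estimate weights from a positive reciprocal matrix a (a[j][i] = 1/a[i][j])
    by power iteration toward the principal eigenvector; also return lambda_max."""
    n = len(a)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(a[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]
    # Estimate the principal eigenvalue from A*w componentwise
    aw = [sum(a[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(aw[i] / w[i] for i in range(n)) / n
    return w, lam

def consistency_index(lam, n):
    """Saaty's CI = (lambda_max - n) / (n - 1); zero for a consistent matrix."""
    return (lam - n) / (n - 1)

# A perfectly consistent 3x3 matrix built from the weight ratios 0.6 : 0.3 : 0.1
a = [[1, 2, 6],
     [1/2, 1, 3],
     [1/6, 1/3, 1]]
w, lam = priority_vector(a)
print([round(x, 3) for x in w], round(consistency_index(lam, 3), 6))
# recovers weights close to [0.6, 0.3, 0.1] with CI close to 0
```

Perturbing any single entry of `a` pushes lambda_max above n and the CI above zero, which is exactly the kind of behavior the consistency indices compared in the thesis quantify.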