944 results for Hazard-Based Models
Abstract:
Ignition and the progression of combustion in a particle bed are studied in order to improve fire safety and to understand and develop the operation of combustion devices fired with solid fuels. The aim of this study is to collect experimental and theoretical results on ignition and flame front propagation that will aid the development and design of fixed-bed combustion and gasification devices. The work is a preliminary study for the experimental and theoretical parts that follow it. The treatment focuses in particular on wood-based fuels. Targets for reducing carbon dioxide emissions, together with the increasing energy use of solid wastes and the reduction of landfill disposal, will increase bed combustion in the near future. Because transport distances must be optimized, fairly small combustion plants have to be built, for which fixed-bed combustion technology is the most economical option. According to Semenov's definition, the ignition point is the state and moment at which the net energy released per unit time in the reactions between fuel and oxygen equals the net energy flow transferred to the surroundings. Autoignition means ignition caused by an increase in the ambient temperature or pressure. Forced ignition occurs when, for example, a flame or a glowing solid body near the ignition point causes local ignition and the spread of an ignition front to the rest of the fuel. Experimental research has identified the most important factors affecting ignition and the propagation of the ignition front as the fuel's moisture content, volatile matter content and heating value, the porosity of the particle bed, particle size and shape, the radiative heat flux density incident on the fuel surface, the gas flow velocity in the bed, the oxygen fraction in the surroundings, and the preheating of the combustion air. Increasing moisture increases the ignition energy and ignition temperature and lengthens the ignition time.
The more volatiles a fuel contains, the lower the temperature at which it ignites. Ignition and the propagation of the ignition front are faster the higher the fuel's heating value. Increasing bed porosity has been observed to increase the propagation rate of combustion. Small particles generally ignite faster and at lower temperatures than large ones, and ignition front propagation accelerates as the particles' surface-area-to-volume ratio increases. In many combustion applications the radiative heat flux density is the most significant heat transfer factor, and its increase naturally speeds up ignition. The flow velocity of air and combustion gases in the bed affects convective heat transfer and the oxygen concentration in the ignition zone: an air flow can cool the bed, whereas a hot gas flow heats it. Increasing the oxygen fraction speeds up ignition and flame front propagation until a state is reached beyond which larger flows cool and dilute the reaction zone. Preheating the combustion air accelerates the propagation of the ignition front. Ignition and flame front propagation are usually described with empirical models or with models based on conservation equations. Empirical models rest on correlations derived from measurements together with some known physical laws. In models based on conservation equations, conservation equations for mass, energy, momentum, and the chemical elements are defined for the system, and the transfer equations describing their rates are constructed from relations obtained through theoretical and experimental research. These model classes partly overlap. The ignition of surfaces is often described with models based on conservation equations, whereas the modelling of particle beds relies mostly on empirical equations.
Of the models describing particle beds, Xie and Liang's study of the ignition of a coal particle bed and Gort's study of reaction front propagation in the combustion of wood and waste come closest to modelling based on conservation equations. In all models, however, the real case has to be simplified, for example by reducing the number of dimensions, reactions, and species, and by eliminating the less significant transfer mechanisms. Few studies of ignition and combustion propagation directly serve fixed-bed combustion and gasification. The fuels, beds, and ambient conditions of studies made for other purposes usually differ clearly from the corresponding conditions in combustion devices. Co-combustion of fuel particles of different sizes and of fuels with different properties has hardly been studied at all. There is only little research on the effect of fuel particle shape, and no studies were found on the effects of air channelling.
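Semenov's ignition criterion invoked in the abstract above (heat generation from the fuel-oxygen reaction balancing heat loss to the surroundings) can be illustrated with a small numerical sketch. All kinetic and heat-transfer parameters below are invented round numbers chosen for illustration, not values fitted to any real fuel:

```python
import math

# Semenov-style thermal ignition check: ignition occurs when Arrhenius heat
# generation exceeds Newtonian heat loss at every temperature above ambient,
# so no stable low-temperature balance point exists.
# All parameter values are hypothetical round numbers.

R = 8.314  # universal gas constant, J/(mol K)

def q_gen(T, A=1e9, E=1.2e5, dH=1.0e4):
    """Heat generation rate ~ Arrhenius kinetics (arbitrary power scale)."""
    return dH * A * math.exp(-E / (R * T))

def q_loss(T, T_amb, hS=5.0):
    """Newtonian heat loss to surroundings at ambient temperature T_amb."""
    return hS * (T - T_amb)

def ignites(T_amb, T_max=2000.0, steps=20000):
    """True if generation exceeds loss over the whole temperature range,
    i.e. the system runs away; False if a stable balance point exists."""
    for i in range(1, steps + 1):
        T = T_amb + (T_max - T_amb) * i / steps
        if q_gen(T) < q_loss(T, T_amb):
            return False  # heat loss wins somewhere: no ignition
    return True
```

With these invented parameters a cold environment yields a stable balance point (no ignition), while a sufficiently hot one does not, which is exactly the autoignition-by-ambient-temperature behaviour described in the abstract.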
Abstract:
Simulation is a useful tool in cardiac SPECT to assess quantification algorithms. However, simple equation-based models are limited in their ability to simulate realistic heart motion and perfusion. We present a numerical dynamic model of the left ventricle, which allows us to simulate normal and anomalous cardiac cycles, as well as perfusion defects. Bicubic splines were fitted to a number of control points to represent endocardial and epicardial surfaces of the left ventricle. A transformation from each point on the surface to a template of activity was made to represent the myocardial perfusion. Geometry-based and patient-based simulations were performed to illustrate this model. Geometry-based simulations modeled (1) a normal patient, (2) a well-perfused patient with abnormal regional function, (3) an ischaemic patient with abnormal regional function, and (4) a patient study including tracer kinetics. Patient-based simulation consisted of a left ventricle including a realistic shape and motion obtained from a magnetic resonance study. We conclude that this model has the potential to study the influence of several physical parameters and the left ventricle contraction in myocardial perfusion SPECT and gated-SPECT studies.
Abstract:
Object. The aim of this study was to identify patients who are likely to benefit from surgery for unruptured brain arteriovenous malformations (ubAVMs). Methods. The authors' database was interrogated for the risk and outcome of hemorrhage after referral and the outcome from surgery. Furthermore, the outcome from surgery incorporated those cases excluded from surgery because of perceived greater risk (sensitivity analysis). Finally, a comparison was made for the authors' patients between the natural history and surgery. Data were collected for 427 consecutively enrolled patients with ubAVMs in a database that included patients who were conservatively managed. Kaplan-Meier analysis was performed on patients observed for more than 1 day to determine the risk of hemorrhage. Variables that may influence the risk of first hemorrhage were assessed using Cox proportional hazards regression models and Kaplan-Meier life table analyses from referral until the first occurrence of the following: hemorrhage, treatment, or last review. The outcome from surgery (leading to a new permanent neurological deficit with last-review modified Rankin Scale [mRS] score > 1) was determined. Further sensitivity analysis was made to predict risk from surgery for the total ubAVM cohort by incorporating outcomes of surgical cases as well as cases excluded from surgery because of perceived risk, and assuming an adverse outcome for these excluded cases. Results. A total of 377 patients with a ubAVM were included in the analysis of the risk of hemorrhage. The 5-year risk of hemorrhage for ubAVM was 11.5%. Hemorrhage resulted in an mRS score > 1 in 14 cases (88% [95% CI 63%-98%]). Patients with Spetzler-Ponce Class A ubAVMs treated by surgery (n = 190) had a risk from surgery of 1.6% (95% CI 0.3%-4.8%) for a permanent neurological deficit leading to an mRS score > 1 and 0.5% (95% CI < 0.1%-3.2%) for a permanent neurological deficit leading to an mRS score > 2.
Patients with Spetzler-Ponce Class B ubAVMs treated by surgery (n = 107) had a risk from surgery of 14.0% (95% CI 8.6%-22.0%) for a permanent neurological deficit leading to an mRS score > 1. Sensitivity analysis of Spetzler-Ponce Class B ubAVMs, including those in patients excluded from surgery, showed that the true risk for surgically eligible patients may have been as high as 15.6% (95% CI 9.9%-23.7%) for mRS score > 1, had all patients who were perceived to have a greater risk experienced an adverse outcome. Patients with Spetzler-Ponce Class C ubAVMs treated by surgery (n = 44) had a risk from surgery of 38.6% (95% CI 25.7%-53.4%) for a permanent neurological deficit leading to an mRS score > 1. Sensitivity analysis of Class C ubAVMs, including those harbored by patients excluded from surgery, showed that the true risk for surgically eligible patients may have been as high as 60.9% (95% CI 49.2%-71.5%) for mRS score > 1, had all patients who were perceived to have a greater risk experienced an adverse outcome. Conclusion. Surgical outcomes for Spetzler-Ponce Class A ubAVMs are better than those for conservative management.
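The Kaplan-Meier analysis named in the Methods can be sketched with a minimal product-limit estimator. The follow-up times and censoring flags below are a hypothetical toy cohort for illustration only, not the authors' data:

```python
# Minimal Kaplan-Meier product-limit estimator: at each distinct event time t,
# survival is multiplied by (1 - deaths_at_t / number_at_risk_just_before_t).
# Censored subjects (treated or lost to follow-up) leave the risk set without
# counting as events. Toy data only; not the study cohort.

def kaplan_meier(times, events):
    """Return [(t, S(t))] at each time where an event occurred.

    times  -- follow-up time for each subject
    events -- 1 if hemorrhage observed at that time, 0 if censored
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    survival = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        removed = sum(1 for tt, e in data if tt == t)
        if deaths > 0:
            survival *= 1.0 - deaths / n_at_risk
            curve.append((t, survival))
        n_at_risk -= removed
        i += removed
    return curve

# Hypothetical cohort: times in years; 1 = hemorrhage, 0 = censored.
curve = kaplan_meier([1, 2, 2, 3, 5, 5, 6], [1, 0, 1, 0, 1, 0, 0])
```

For this toy input the curve drops at years 1, 2, and 5; reading off S(5) gives the analogue of the study's "5-year risk of hemorrhage" as 1 − S(5).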
Abstract:
This thesis focuses on the social-psychological factors that help coping with structural disadvantage, and specifically on the role of cohesive ingroups and the sense of connectedness and efficacy they entail in this process. It aims to complement existing group-based models of coping that are grounded in a categorization perspective on groups and consequently focus exclusively on the large-scale categories made salient in intergroup contexts of comparison. The dissertation accomplishes this aim through a reconsideration of between-persons relational interdependence as a sufficient and independent antecedent of a sense of groupness, and of the benefits that a sense of group connectedness in one's direct environment, regardless of the categorical or relational basis of groupness, might have in the everyday struggles of disadvantaged group members. The three empirical papers aim to validate this approach, outlined in the theoretical introduction, by testing derived hypotheses. They are based on data collected with youth populations (aged 15-30) from three institutions in French-speaking Switzerland within the context of a larger project on youth transitions. The methods of data collection are paper-and-pencil questionnaires and in-depth interviews with a selected sub-sample of participants. The key argument of the first paper is that members of socially disadvantaged categories face higher barriers to their life projects and that a general sense of connectedness, whether based on categorical identities or on other proximal groups and relations, mitigates the feeling of powerlessness associated with this experience. The second paper develops and tests a model that defines individual needs satisfaction as an antecedent of self-group bonds, and the efficacy beliefs derived from these intragroup bonds as the mechanism underlying the role of ingroups in coping.
The third paper highlights the complexities that might be associated with the construction of a sense of groupness directly from intergroup comparisons and categorization-based disadvantage, and points toward a more subtle understanding of the processes underlying the emergence of groupness out of the situation of structural disadvantage. Overall, the findings confirm the central role of ingroups in coping with structural disadvantage and the importance of an understanding of groupness, and of its role, that goes beyond the dominant focus on intergroup contexts and categorization processes.
Abstract:
A user interface is the boundary between the user and the functions offered by a system, and its quality affects the performance of those functions either positively or negatively. It is therefore good practice, during the design phase of an application, to evaluate the quality of the user interface and its functions and to test the viability of ideas by building prototypes. Prototyping makes it possible to identify and fix potential problems already at the drawing board. This master's thesis deals with the prototyping of a user interface and its functions carried out during the development of a web application. User interfaces can be modelled with various methods, which are reviewed in this work from a technological point of view, that is, how prototyping methods can be applied in the different phases of a project. A short survey is made of the tools used to support prototyping, presenting at a general level a few pieces of software from different application categories, and the use of design patterns is also discussed. The work shows that general prototyping methods and principles can be applied to the prototyping of web applications. It is useful to start prototyping with sketches and to move at an early stage to HTML mock-ups, which come close to the implementation technologies and make it possible to model the application's character, look, feel, and interaction. HTML prototypes can be refined into mixed-fidelity models, and they serve as the basis for the implementation. In further development, ideas can be presented with techniques of several different fidelities.
Abstract:
The aim of this study was to simulate blood flow in the human thoracic aorta and to understand the role of flow dynamics in the initiation and localization of atherosclerotic plaque. Blood flow dynamics were numerically simulated in three idealized and two realistic models of the thoracic aorta. The idealized models were reconstructed from measurements available in the literature, and the realistic models were constructed by image processing of Computed Tomography (CT) images. The CT images were made available by South Karelia Central Hospital in Lappeenranta. The reconstruction of the thoracic aorta consisted of operations such as contrast adjustment, image segmentation, and 3D surface rendering. Additional design operations were performed to make the aorta models compatible with the numerical-method-based computer codes. The image processing and design operations were performed with specialized medical image processing software. Pulsatile pressure and velocity profiles were imposed as inlet boundary conditions. The blood was assumed to be a homogeneous, incompressible, Newtonian fluid. The simulations with the idealized models were carried out with a Finite Element Method based computer code, while the simulations with the realistic models were carried out with a Finite Volume Method based computer code. Simulations were carried out for four cardiac cycles, and the distributions of flow, pressure, and Wall Shear Stress (WSS) observed during the fourth cycle were extensively analyzed. The aim of the simulations with the idealized models was to obtain an estimate of the flow dynamics in a realistic aorta model, and the motive behind the choice of three aorta models with distinct features was to understand the dependence of flow dynamics on aortic anatomy.
A highly disturbed and nonuniform distribution of velocity and WSS was observed in the aortic arch, near the brachiocephalic, left common carotid, and left subclavian arteries. The WSS profiles at the roots of the branches showed significant differences as the geometry of the aorta and its branches varied. Comparison of instantaneous WSS profiles revealed that the model with straight branching arteries had relatively lower WSS than the aorta model with curved branches. In addition, significant differences were observed in the spatial and temporal profiles of WSS, flow, and pressure. The study with the idealized models was extended to blood flow in the thoracic aorta under hypertension and hypotension: one of the idealized aorta models was modified, along with its boundary conditions, to mimic these conditions. The simulations with the realistic models extracted from CT scans demonstrated more realistic flow dynamics than the idealized models. During systole, the velocity in the ascending aorta was skewed towards the outer wall of the aortic arch, and the flow developed secondary flow patterns as it moved downstream towards the arch. Unlike in the idealized models, the distribution of flow was nonplanar and heavily guided by the arterial anatomy. Flow cavitation was observed in the aorta model that was imaged with longer branches; it could not be properly observed in the model whose imaging captured only a shorter length of the aortic branches. Flow circulation was also observed at the inner wall of the aortic arch. During diastole, however, the flow profiles were almost flat and regular due to the acceleration of flow at the inlet, and they were weakly turbulent during flow reversal. The complex flow patterns caused a non-uniform distribution of WSS, with high WSS at the junctions of the branches and the aortic arch.
Low WSS was distributed at the proximal part of the junction, while intermediate WSS was distributed at the distal part of the junction. The pulsatile nature of the inflow caused oscillating WSS at the branch entry regions and the inner curvature of the aortic arch. Based on the WSS distribution in the realistic model, one of the aorta models was altered to introduce artificial atherosclerotic plaque at the branch entry region and the inner curvature of the aortic arch. Atherosclerotic plaque causing 50% blockage of the lumen was introduced in the brachiocephalic artery, the common carotid artery, the left subclavian artery, and the aortic arch. The aim of this part of the study was first to study the effect of stenosis on the flow and WSS distribution, then to understand the effect of the shape of the atherosclerotic plaque, and finally to investigate the effect of the severity of the lumen blockage. The results revealed that the distribution of WSS is significantly affected by plaque with a mere 50% stenosis. An asymmetric stenosis causes higher WSS in the branching arteries than a symmetric plaque. The flow dynamics within the thoracic aorta models have been extensively studied and reported here, and the effects of pressure and arterial anatomy on the flow dynamics were investigated. The distribution of complex flow and WSS correlates with the localization of atherosclerosis. From the available results we conclude that the thoracic aorta, with its complex anatomy, is the artery most vulnerable to the localization and development of atherosclerosis. The flow dynamics and the arterial anatomy play a role in the localization of atherosclerosis, and patient-specific image-based models can be used to identify the locations in the aorta vulnerable to the development of arterial diseases such as atherosclerosis.
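As a rough sanity check on the magnitudes involved, wall shear stress in an idealized straight rigid tube under steady Poiseuille flow follows τ_w = 4μQ/(πR³). This is only a back-of-the-envelope formula, not the pulsatile 3D CFD performed in the thesis; the viscosity and radius below are illustrative textbook-scale values, not the thesis's model parameters:

```python
import math

# Wall shear stress for steady Poiseuille flow in a straight rigid tube:
#   tau_w = 4 * mu * Q / (pi * R^3)
# Order-of-magnitude sketch only; all numbers are illustrative assumptions.

def poiseuille_wss(mu, q, r):
    """Wall shear stress (Pa) for dynamic viscosity mu (Pa*s),
    volumetric flow rate q (m^3/s), and lumen radius r (m)."""
    return 4.0 * mu * q / (math.pi * r ** 3)

mu = 3.5e-3          # Pa*s, a common Newtonian blood viscosity assumption
q = 5.0 / 60 * 1e-3  # ~5 L/min cardiac output converted to m^3/s
r = 0.0125           # ~12.5 mm aortic lumen radius (illustrative)
tau = poiseuille_wss(mu, q, r)  # ~0.2 Pa, sub-pascal as expected for the aorta
```

The cubic dependence on radius is why modest lumen narrowing (such as the 50% stenoses introduced above) changes WSS so strongly.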
Abstract:
This study examines the structure of the Russian Reflexive Marker (-ся/-сь) and offers a usage-based model building on Construction Grammar and a probabilistic view of linguistic structure. Traditionally, reflexive verbs are accounted for relative to non-reflexive verbs. These accounts assume that linguistic structures emerge as pairs. Furthermore, they assume a directionality whereby the semantics and structure of a reflexive verb can be derived from the non-reflexive verb. However, this directionality does not necessarily hold diachronically. Additionally, the semantics and the patterns associated with a particular reflexive verb are not always shared with the non-reflexive verb. Thus, a model is proposed that can accommodate the traditional pairs as well as the possible deviations without postulating different systems. A random sample of 2000 instances marked with the Reflexive Marker was extracted from the Russian National Corpus, and the sample used in this study contains 819 unique reflexive verbs. This study moves away from the traditional pair account and introduces the concept of the Neighbor Verb. A neighbor verb exists for a reflexive verb if the two share the same phonological form excluding the Reflexive Marker. It is claimed here that the Reflexive Marker constitutes a system in Russian and that the relation between the reflexive and neighbor verbs constitutes a cross-paradigmatic relation. Furthermore, the relation between the reflexive and the neighbor verb is argued to be one of symbolic connectivity rather than directionality. Effectively, the relation holding between particular instantiations can vary. The theoretical basis of the present study builds on this assumption. Several new variables are examined in order to systematically model the variability of this symbolic connectivity, specifically the degree and strength of connectivity between items. In usage-based models, the lexicon does not constitute an unstructured list of items.
Instead, items are assumed to be interconnected in a network. This interconnectedness is defined as Neighborhood in this study. Additionally, each verb carves its own niche within the Neighborhood and this interconnectedness is modeled through rhyme verbs constituting the degree of connectivity of a particular verb in the lexicon. The second component of the degree of connectivity concerns the status of a particular verb relative to its rhyme verbs. The connectivity within the neighborhood of a particular verb varies and this variability is quantified by using the Levenshtein distance. The second property of the lexical network is the strength of connectivity between items. Frequency of use has been one of the primary variables in functional linguistics used to probe this. In addition, a new variable called Constructional Entropy is introduced in this study building on information theory. It is a quantification of the amount of information carried by a particular reflexive verb in one or more argument constructions. The results of the lexical connectivity indicate that the reflexive verbs have statistically greater neighborhood distances than the neighbor verbs. This distributional property can be used to motivate the traditional observation that the reflexive verbs tend to have idiosyncratic properties. A set of argument constructions, generalizations over usage patterns, are proposed for the reflexive verbs in this study. In addition to the variables associated with the lexical connectivity, a number of variables proposed in the literature are explored and used as predictors in the model. The second part of this study introduces the use of a machine learning algorithm called Random Forests. The performance of the model indicates that it is capable, up to a degree, of disambiguating the proposed argument construction types of the Russian Reflexive Marker. Additionally, a global ranking of the predictors used in the model is offered. 
Finally, most construction grammars assume that argument constructions form a network structure. A new method is proposed that establishes a generalization over the argument constructions, referred to as the Linking Construction. In sum, this study explores the structural properties of the Russian Reflexive Marker, and a new model is set forth that can accommodate both the traditional pairs and potential deviations from them in a principled manner.
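Two of the quantities the study relies on, the Levenshtein distance (used to quantify neighborhood connectivity) and constructional entropy (the information carried by a verb's distribution over argument constructions), can be sketched in minimal form. The verb string and the construction counts below are hypothetical illustrations, not items from the corpus sample:

```python
from collections import Counter
from math import log2

# Levenshtein distance: minimum number of single-character insertions,
# deletions, and substitutions turning one string into another.
def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

# Constructional entropy: Shannon entropy (bits) of a verb's frequency
# distribution over argument constructions.
def constructional_entropy(counts):
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values() if c)

d = levenshtein("boiatsia", "boiat")  # reflexive form vs. stripped stem
h = constructional_entropy(Counter({"intransitive": 6, "oblique": 2}))
```

A verb used in one construction only has entropy 0; the more evenly its uses spread over constructions, the higher the entropy, which is the sense in which the measure captures "amount of information".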
Abstract:
The primary aim of this work was to answer the question of whether the cash flows of project-based business can be forecast over a horizon of 3-15 months and, if so, how and with what accuracy. The research was carried out as a theoretical study of the subject, and on its basis a model was created for forecasting the target company's cash flows over a 3-15 month horizon. Five years of data on the target company's cash flows, budget, and actual business figures were available for building the model. The theoretical part of the work examined, on the basis of the literature, project business, budgeting, and cash flows and their forecasting. A model for forecasting cash flows, based on historical data, was then built for the target company on the basis of the theory. In building the model, the most significant cash flow components were first identified, after which forecasting methods were devised for them. At the same time, the accuracy with which the cash flows of project-based business can be forecast was assessed. The result of the study was a forecasting model for the target company based on historical data. Tests made with the model showed that the cash flows of project-based business can be forecast with fairly good accuracy, although not as reliably as the cash flows of a business that develops more steadily. When using a model based on history, one must also remember that nothing guarantees that history will repeat itself in the future.
Abstract:
This thesis studies the development of a service offering model that creates added value for customers in the field of logistics services. The study focuses on the classification of offerings and the structure of the model. The purpose of the model is to provide value-added solutions for customers and enable a superior service experience. The aim of the thesis is to define what customers expect from a logistics solution provider and what value customers appreciate so greatly that they would invest in value-added services. Value propositions, the cost structures of the offerings, and appropriate pricing methods are studied. First, a literature review on solution business models and customer value is conducted. Customer value is then identified through customer interviews, using qualitative empirical data. To exploit expert knowledge of logistics, an innovation workshop is utilized, and customers and experts are involved in the design process of the model. As a result of the thesis, a three-level value-added service offering model is created on the basis of the empirical and theoretical data. Offerings with value propositions are proposed, and the level of the model reflects the depth of the customer-provider relationship and the amount of added value. Improvements in performance efficiency and cost savings create the most added value for customers. Value-based pricing methods, such as performance-based models, are suggested. The results indicate interest in benefiting from networks and partnerships in the field of logistics services, and further investigation of network development is proposed.
Abstract:
Human activity recognition in everyday environments is a critical, but challenging, task in Ambient Intelligence applications to achieve proper Ambient Assisted Living, and key challenges still remain to be dealt with to realize robust methods. One of the major limitations of Ambient Intelligence systems today is the lack of semantic models of the activities in the environment, so that the system can recognize the specific activity being performed by the user(s) and act accordingly. In this context, this thesis addresses the general problem of knowledge representation in Smart Spaces. The main objective is to develop knowledge-based models, equipped with semantics, to learn, infer and monitor human behaviours in Smart Spaces. Moreover, some aspects of this problem have a high degree of uncertainty, and therefore the developed models must be equipped with mechanisms to manage this type of information. A fuzzy ontology and a semantic hybrid system are presented to allow the modelling and recognition of a set of complex real-life scenarios where vagueness and uncertainty are inherent to the human nature of the users that perform them. The handling of uncertain, incomplete and vague data (i.e., missing sensor readings and activity execution variations, since human behaviour is non-deterministic) is approached for the first time through a fuzzy ontology validated in real-time settings within a hybrid data-driven and knowledge-based architecture. The semantics of activities, sub-activities and real-time object interaction are taken into consideration. The proposed framework consists of two main modules: the low-level sub-activity recognizer and the high-level activity recognizer. The first module detects sub-activities (i.e., actions or basic activities), taking input data directly from a depth sensor (Kinect).
The main contribution of this thesis tackles the second component of the hybrid system, which lies on top of the first one, at a higher level of abstraction: it acquires its input data from the first module's output and executes ontological inference to provide users, activities, and their influence in the environment with semantics. This component is thus knowledge-based, and a fuzzy ontology was designed to model the high-level activities. Since activity recognition requires context-awareness and the ability to discriminate among activities in different environments, the semantic framework allows for modelling common-sense knowledge in the form of a rule-based system that supports expressions close to natural language in the form of fuzzy linguistic labels. The framework's advantages have been evaluated on a challenging new public dataset, CAD-120, achieving an accuracy of 90.1% and 91.1% for low- and high-level activities, respectively. This entails an improvement over both entirely data-driven approaches and merely ontology-based approaches. As an added value, for the system to be sufficiently simple and flexible to be managed by non-expert users, and thus to facilitate the transfer of research to industry, a development framework was created, composed of a programming toolbox, a hybrid crisp and fuzzy architecture, and graphical models to represent and configure human behaviour in Smart Spaces, in order to give the framework more usability in the final application. As a result, human behaviour recognition can help assist people with special needs, such as in healthcare, independent elderly living, remote rehabilitation monitoring, industrial process guideline control, and many other cases. This thesis shows use cases in these areas.
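The fuzzy linguistic labels the rule-based layer supports can be illustrated with a minimal triangular membership function, mapping a crisp sensor reading to a degree of truth in [0, 1]. The label name and its boundaries below are invented for illustration and are not values from the thesis's ontology:

```python
# Triangular fuzzy membership: degree to which a crisp value x belongs to a
# fuzzy set shaped by (a, b, c), rising from a to the peak at b and falling
# back to zero at c. The "near" label and its metre boundaries are invented.

def triangular(x, a, b, c):
    """Membership degree of x in the triangular fuzzy set (a, b, c)."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Hypothetical label "near" for a user-object distance in metres:
# full membership at 0.25 m, fading out completely by 1 m.
mu_near = triangular(0.5, 0.0, 0.25, 1.0)
```

A rule such as "IF distance is near AND object is cup THEN sub-activity is drinking" would then fire to the degree given by combining such memberships, which is how vague natural-language terms enter the inference.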
Abstract:
The shift towards a knowledge-based economy has inevitably prompted the evolution of patent exploitation. Nowadays, a patent is more than just a prevention tool for a company to block its competitors from developing rival technologies; it lies at the very heart of the company's strategy for value creation and is therefore strategically exploited for economic profit and competitive advantage. Along with the evolution of patent exploitation, the demand for reliable and systematic patent valuation has also reached an unprecedented level. However, most of the quantitative approaches in use to assess patents arguably fall into four categories, all based solely on conventional discounted cash flow analysis, whose usability and reliability in the context of patent valuation are greatly limited by five practical issues: market illiquidity, poor data availability, discriminatory cash-flow estimations, and the method's inability to account for changing risk and for managerial flexibility. This dissertation attempts to overcome these impeding barriers by rationalizing the use of two techniques, namely fuzzy set theory (aimed at the first three issues) and real option analysis (aimed at the last two). It commences with an investigation into the nature of the uncertainties inherent in patent cash flow estimation and claims that two levels of uncertainty must be properly accounted for. Further investigation reveals that both levels of uncertainty fall under the categorization of subjective uncertainty, which differs from objective uncertainty originating in inherent randomness: uncertainties labelled as subjective are highly related to the behavioural aspects of decision making and are usually witnessed whenever human judgement, evaluation or reasoning is crucial to the system under consideration and there is a lack of complete knowledge of its variables.
Having clarified their nature, the application of fuzzy set theory to modelling patent-related uncertain quantities is readily justified. The application of real option analysis to patent valuation is prompted by the fact that both the patent application process and the subsequent patent exploitation (or commercialization) are subject to a wide range of decisions at multiple successive stages. In other words, both patent applicants and patentees are faced with a large variety of courses of action as to how their patent applications and granted patents can be managed. Since they have the right to run their projects actively, this flexibility has value and thus must be properly accounted for. Accordingly, this dissertation provides an explicit identification of the types of managerial flexibility inherent in patent-related decision making and in patent valuation, and a discussion of how they can be interpreted in terms of real options. Additionally, the use of the proposed techniques in practical applications is demonstrated by three models based on fuzzy real option analysis. In particular, the pay-off method and the extended fuzzy Black-Scholes model are employed, respectively, to investigate the profitability of a patent application project for a new process for the preparation of a gypsum-fibre composite and to justify the subsequent patent commercialization decision, and a fuzzy binomial model is designed to reveal the economic potential of a patent licensing opportunity.
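The fuzzy pay-off method mentioned above can be sketched numerically. Under the usual formulation (a triangular fuzzy NPV with pessimistic, best-guess, and optimistic corner values), the real option value is the possibilistic mean of the positive side of the pay-off distribution, weighted by the share of the distribution lying over positive NPVs. The scenario values below (−2, 3, 6, say in MEUR) are invented for illustration and are not from the dissertation's case studies:

```python
# Fuzzy pay-off method sketch for a triangular fuzzy NPV (a <= b <= c):
#   ROV = (positive area / total area) * possibilistic mean of positive side,
# with both factors evaluated numerically by the midpoint rule.
# Scenario numbers are hypothetical.

def payoff_method(a, b, c, steps=100_000):
    """Real option value of a triangular fuzzy NPV with corners a < b < c."""
    # Weight: share of the membership area lying over positive NPVs.
    dx = (c - a) / steps
    total = pos = 0.0
    for i in range(steps):
        x = a + (i + 0.5) * dx
        mu = (x - a) / (b - a) if x <= b else (c - x) / (c - b)
        total += mu
        if x > 0:
            pos += mu
    weight = pos / total
    # Possibilistic mean of the positive side via gamma-level cuts:
    # E = integral over gamma in [0,1] of gamma * (left(gamma) + right(gamma)),
    # with the left cut endpoint truncated at zero.
    dg = 1.0 / steps
    mean = 0.0
    for i in range(steps):
        g = (i + 0.5) * dg
        left = max(0.0, a + g * (b - a))
        right = c - g * (c - b)
        mean += g * (left + right) * dg
    return weight * mean

rov = payoff_method(-2.0, 3.0, 6.0)
```

For a fully positive triangle the weight is 1 and the result reduces to the well-known possibilistic mean (a + 4b + c)/6; the truncation at zero is what encodes the option holder's right to walk away from negative outcomes.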
Resumo:
Basic relationships between certain regions of space are formulated in natural language in everyday situations. For example, a customer specifies the outline of his future home to the architect by indicating which rooms should be close to each other. Qualitative spatial reasoning, as an area of artificial intelligence, tries to develop a theory of space based on similar notions. In formal ontology and in ontological computer science, mereotopology is a first-order theory, embodying mereological and topological concepts, of the relations among wholes, parts, parts of parts, and the boundaries between parts. We shall introduce abstract relation algebras and present their structural properties as well as their connection to algebras of binary relations. This will be followed by details of the expressiveness of algebras of relations for region-based models. Mereotopology has been the main basis for most region-based theories of space. Since its earliest inception, many theories of mereotopology have been proposed in artificial intelligence, among which the Region Connection Calculus is the most prominent. The expressiveness of the Region Connection Calculus in relational logic is far greater than its original eight base relations might suggest. In the thesis we formulate ways to automatically generate representable relation algebras from spatial data based on the Region Connection Calculus. The generation of new algebras is a two-pronged approach, involving the splitting of existing relations to form new algebras and the refinement of such newly generated algebras. We present an implementation of a system automating the aforementioned steps and provide an effective and convenient interface for defining new spatial relations and generating representable relation algebras.
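The connection between abstract relation algebras and algebras of binary relations rests on a few concrete operations: relational composition, converse, and the identity relation. A minimal sketch on a toy base set (the set U and the relations R, S below are hypothetical, not from the thesis):

```python
# Algebra of binary relations on a small base set: composition,
# converse, and the identity relation, represented as sets of pairs.

U = {1, 2, 3}

def compose(R, S):
    """Relational composition: (a, c) whenever some b links a to c."""
    return {(a, c) for (a, b) in R for (b2, c) in S if b == b2}

def converse(R):
    """Converse relation: swap each pair."""
    return {(b, a) for (a, b) in R}

identity = {(a, a) for a in U}

R = {(1, 2), (2, 3)}
S = {(2, 2), (3, 1)}

print(compose(R, S))              # {(1, 2), (2, 1)}
print(converse(R))                # {(2, 1), (3, 2)}
print(compose(R, identity) == R)  # True: identity is a right unit
```

Representable relation algebras are exactly those whose abstract composition, converse, and identity can be realized by such set-theoretic operations on actual binary relations.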
Resumo:
This lexical decision study with eye tracking of Japanese two-kanji-character words investigated the order in which a whole two-character word and its morphographic constituents are activated in the course of lexical access, the relative contributions of the left and the right characters in lexical decision, the depth to which semantic radicals are processed, and how nonlinguistic factors affect lexical processes. Mixed-effects regression analyses of response times and subgaze durations (i.e., first-pass fixation time spent on each of the two characters) revealed joint contributions of morphographic units at all levels of the linguistic structure with the magnitude and the direction of the lexical effects modulated by readers’ locus of attention in a left-to-right preferred processing path. During the early time frame, character effects were larger in magnitude and more robust than radical and whole-word effects, regardless of the font size and the type of nonwords. Extending previous radical-based and character-based models, we propose a task/decision-sensitive character-driven processing model with a level-skipping assumption: Connections from the feature level bypass the lower radical level and link up directly to the higher character level.
Resumo:
Introduction: Dementia can be caused by Alzheimer's disease (AD), cerebrovascular disease (CVD), or a combination of the two. When cerebrovascular disease accompanies dementia, survival is considered to be reduced. It remains to be shown whether treatment with cholinesterase inhibitors (ChEIs), which improves cognitive symptoms and global function in patients with AD, also works in vascular forms of dementia. Objectives: The present study was designed to determine whether coexisting CVD was associated with survival or with time to nursing home placement in AD patients treated with ChEIs. Studies showing poorer outcomes in patients with CVD than in those without could argue against the use of ChEIs in patients with both AD and CVD. The objective of a second analysis was to assess, for the first time in AD patients, the potential impact of immortal time (and follow-up) bias on these outcomes (death or nursing home placement). Methods: A retrospective cohort study was conducted using the Régie de l'Assurance Maladie du Québec (RAMQ) databases to examine time to nursing home placement or death in AD patients aged 66 years and over, with or without CVD, treated with ChEIs between July 1, 2000 and June 30, 2003. Since ChEIs are indicated only for AD in Canada, each ChEI prescription was taken as a diagnosis of AD. Concomitant CVD was identified on the basis of a lifetime diagnosis of stroke or endarterectomy, or a diagnosis of transient ischemic attack in the six months preceding the entry date. 
Separate analyses were conducted for patients who used ChEIs persistently and for those who discontinued therapy. Seven Cox proportional hazards regression models, varying in the definition of the entry date (start of follow-up) and the length of follow-up, were used to assess the impact of immortal time bias. Results: 4,428 patients met the inclusion criteria for AD with CVD; the AD-only group comprised 13,512 individuals. For the composite endpoint of time to nursing home placement or death, 1,000-day survival rates were lower among AD patients with CVD than among those with AD alone (p<0.01), but the absolute differences were very small (84% vs. 86% for continuous ChEI use; 77% vs. 78% for discontinued ChEI therapy). For the secondary endpoints, time to death was shorter in patients with CVD than in those without, but time to nursing home placement did not differ between the two groups. In the primary (unbiased) analysis, no association was found between the type of ChEI and death or nursing home placement. However, once immortal time bias was introduced, a strong differential effect was observed. Limitations: The results may have been affected by selection bias (misclassification), by between-group differences in smoking and body mass index (information not available in the RAMQ databases), and by differences in the duration of ChEI therapy. 
Conclusions: The associations between coexisting CVD and time to nursing home placement or death appear to be of little clinical relevance among AD patients treated with ChEIs. The absence of a difference between AD patients with and without CVD suggests that coexisting CVD should not be a reason to deny AD patients access to ChEI treatment. Counting unexposed person-time in the analysis eliminates biased estimates of drug effectiveness.
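The immortal time bias examined above arises when the event-free span between cohort entry and the first drug dispensing is counted as exposed person-time. A minimal numeric sketch of the mechanism, with hypothetical patient data rather than the RAMQ study figures:

```python
# Immortal time bias illustrated with person-time accounting.
# Hypothetical data: (years before first dispensing, years after, event during follow-up).
# The pre-dispensing ("immortal") year is event-free by construction, so
# counting it as exposed deflates the exposed event rate.

def event_rate(events, person_years):
    return events / person_years

patients = [(1.0, 4.0, True), (1.0, 2.0, False)]
events = sum(1 for _, _, had_event in patients if had_event)

# Biased analysis: all follow-up (including the immortal year) classed as exposed.
biased_exposed_py = sum(pre + post for pre, post, _ in patients)
biased_rate = event_rate(events, biased_exposed_py)

# Correct analysis: pre-dispensing time counts as unexposed person-time.
correct_exposed_py = sum(post for _, post, _ in patients)
correct_rate = event_rate(events, correct_exposed_py)

print(biased_rate)   # 0.125 (1 event / 8 person-years)
print(correct_rate)  # ~0.167 (1 event / 6 person-years)
```

Misclassifying the immortal time makes the drug look protective, which is the differential effect the seven Cox model specifications were designed to expose.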
Resumo:
Statistical machine translation systems translate from a source language into a target language. In most reference translation systems, the basic unit considered in textual analysis is the word form as observed in a text. This design yields good performance when translating between two morphologically poor languages, but it no longer holds when translating into a morphologically rich (or complex) language. The goal of our work is to develop a statistical machine translation system that addresses the challenges raised by morphological complexity. In this thesis, we first examine a number of methods considered as extensions to traditional translation systems and evaluate their performance against state-of-the-art baseline systems on English-Inuktitut and English-Finnish translation tasks. We then develop a new segmentation algorithm that takes into account information from the language pair being translated. This segmentation algorithm is integrated into the phrase-based translation model to form our segment-sequence-based translation system. Finally, we combine the resulting system with post-processing algorithms to obtain a complete translation system. The experimental results reported in this thesis show that the proposed segment-sequence-based translation system yields significant improvements in translation quality as measured by the BLEU evaluation metric (Papineni et al., 2002). 
In particular, our segmentation approach slightly improves translation quality over the reference system, and a significant improvement in translation quality is observed relative to the baseline preprocessing techniques.
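The BLEU metric used for these evaluations combines modified n-gram precisions with a brevity penalty. A minimal unsmoothed sentence-level sketch (real evaluations, including those in the thesis, use corpus-level statistics and typically smoothing; the example sentences are hypothetical):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Multiset of the n-grams of a token sequence."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Unsmoothed sentence-level BLEU against a single reference."""
    precisions = []
    for n in range(1, max_n + 1):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        overlap = sum(min(count, ref[g]) for g, count in cand.items())
        total = sum(cand.values())
        if total == 0 or overlap == 0:
            return 0.0  # without smoothing, any empty order zeroes the score
        precisions.append(overlap / total)
    # Brevity penalty discourages overly short candidates.
    bp = 1.0 if len(candidate) > len(reference) else math.exp(1 - len(reference) / len(candidate))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

cand = "the cat sat on the mat".split()
ref = "the cat sat on the mat".split()
print(bleu(cand, ref))  # 1.0 for an exact match
```

The clipping `min(count, ref[g])` is the "modified" part of the precision: it stops a candidate from being rewarded for repeating a reference word more often than the reference contains it.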