897 results for Simulation-based methods
Abstract:
Investment decision-making on far-reaching innovation ideas is one of the key challenges practitioners and academics face in the field of innovation management. However, management practices and theories strongly rely on evaluation systems that do not fit this setting well. These systems and practices normally cannot capture the value of future opportunities under high uncertainty because they ignore the firm’s potential for growth and flexibility. Real options theory and options-based methods have been offered as a solution to facilitate decision-making on highly uncertain investment objects. Much of the uncertainty inherent in these investment objects is attributable to unknown future events. In this setting, real options theory and methods have faced some challenges. First, the theory and its applications have largely been limited to market-priced real assets. Second, the options perspective has not proved as useful as anticipated because the tools it offers are perceived to be too complicated for managerial use. Third, there are challenges related to the type of uncertainty existing real options methods can handle: they are primarily limited to parametric uncertainty. Nevertheless, the theory is considered promising in the context of far-reaching and strategically important innovation ideas. The objective of this dissertation is to clarify the potential of options-based methodology in the identification of innovation opportunities. The constructive research approach gives new insights into the development potential of real options theory under non-parametric and close-to-radical uncertainty. The distinction between real options and strategic options is presented as an explanans for the discovered limitations of the theory. The findings offer managers a new means of assessing future innovation ideas based on the frameworks constructed during the course of the study.
Abstract:
In accordance with Moore's law, the increasing number of on-chip integrated transistors has enabled modern computing platforms with not only higher processing power but also more affordable prices. As a result, these platforms, including portable devices, workstations and data centres, are becoming an inevitable part of human society. However, with the demand for portability and the rising cost of power, energy efficiency has emerged as a major concern for modern computing platforms. As the complexity of on-chip systems increases, Network-on-Chip (NoC) has proved to be an efficient communication architecture which can further improve system performance and scalability while reducing the design cost. Therefore, in this thesis, we study and propose energy optimization approaches based on the NoC architecture, with a special focus on the following aspects. As the architectural trend of future computing platforms, 3D systems have many benefits, including higher integration density, smaller footprint, heterogeneous integration, etc. Moreover, 3D technology can significantly improve network communication and effectively avoid long wirings, and therefore provide higher system performance and energy efficiency. Given the dynamic nature of on-chip communication in large-scale NoC-based systems, run-time system optimization is of crucial importance in order to achieve higher system reliability and, essentially, energy efficiency. In this thesis, we propose an agent-based system design approach where agents are on-chip components which monitor and control system parameters such as supply voltage, operating frequency, etc. With this approach, we have analysed the implementation alternatives for dynamic voltage and frequency scaling (DVFS) and power gating techniques at different granularities, which reduce both dynamic and leakage energy consumption. Topologies, being one of the key factors for NoCs, are also explored for energy saving purposes.
A Honeycomb NoC architecture is proposed in this thesis with turn-model-based deadlock-free routing algorithms. Our analysis and simulation-based evaluation show that Honeycomb NoCs outperform their Mesh-based counterparts in terms of network cost, system performance as well as energy efficiency.
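The energy savings from DVFS and power gating mentioned in this abstract follow from the standard first-order CMOS power model. The sketch below is purely illustrative (all constants are assumed, not taken from the thesis): it shows why scaling supply voltage and frequency down together yields a cubic reduction in dynamic power.

```python
# Illustrative first-order CMOS power model (assumed constants, not thesis data).

def dynamic_power(alpha, cap, vdd, freq):
    """Dynamic switching power: P = alpha * C * Vdd^2 * f."""
    return alpha * cap * vdd ** 2 * freq

def leakage_power(vdd, i_leak):
    """First-order leakage power: P = Vdd * I_leak (zero when power-gated)."""
    return vdd * i_leak

# Nominal operating point (assumed values).
p_nominal = dynamic_power(alpha=0.2, cap=1e-9, vdd=1.0, freq=1e9)
# Halving both Vdd and f cuts dynamic power by a factor of 2^2 * 2 = 8.
p_scaled = dynamic_power(alpha=0.2, cap=1e-9, vdd=0.5, freq=0.5e9)
print(p_scaled / p_nominal)  # 0.125
```

This cubic dependence is why DVFS targets dynamic energy, while power gating (setting the effective supply to zero) is needed to remove the leakage term as well.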
Abstract:
In this master's thesis I examine the use of visual art and art-based methods in organizations as a manifestation of the needs underlying psychological ownership. My aim is to close the gap between research and practice regarding the use of visual art in organizations. The goal is to find out what added value the use of visual art brings to organizations and how it manifests psychological ownership. The study is qualitative; the data consist of unstructured interviews analysed with discourse analysis. From the interview data I identified discourses on different levels. The main discourse, from invisible to visible, reflects the need for stimulus among the needs motivating psychological ownership; the discourse of space reflects the need for a home, and the discourse of identity reflects the need for identity. The discourses of space and identity partly overlap. Of the motivational needs of psychological ownership, works of visual art manifest stimulus in particular: they act as a stimulus by bringing psychological proximity into organizations. Using visual art in organizations makes the needs motivating psychological ownership visible. Works of visual art bring psychological proximity and stimulate meaningful matters related to these needs. Visual art is an aesthetic, practical tool for developing organizational behaviour, for emotional management in mergers, and for employee engagement.
Abstract:
Identification of low-dimensional structures and main sources of variation from multivariate data are fundamental tasks in data analysis. Many methods aimed at these tasks involve solution of an optimization problem. Thus, the objective of this thesis is to develop computationally efficient and theoretically justified methods for solving such problems. Most of the thesis is based on a statistical model, where ridges of the density estimated from the data are considered as relevant features. Finding ridges, which are generalized maxima, necessitates development of advanced optimization methods. An efficient and convergent trust region Newton method for projecting a point onto a ridge of the underlying density is developed for this purpose. The method is utilized in a differential equation-based approach for tracing ridges and computing projection coordinates along them. The density estimation is done nonparametrically by using Gaussian kernels. This allows application of ridge-based methods with only mild assumptions on the underlying structure of the data. The statistical model and the ridge finding methods are adapted to two different applications. The first one is extraction of curvilinear structures from noisy data mixed with background clutter. The second one is a novel nonlinear generalization of principal component analysis (PCA) and its extension to time series data. The methods have a wide range of potential applications, where most of the earlier approaches are inadequate. Examples include identification of faults from seismic data and identification of filaments from cosmological data. Applicability of the nonlinear PCA to climate analysis and reconstruction of periodic patterns from noisy time series data are also demonstrated. Other contributions of the thesis include development of an efficient semidefinite optimization method for embedding graphs into the Euclidean space.
The method produces structure-preserving embeddings that maximize interpoint distances. It is primarily developed for dimensionality reduction, but also has potential applications in graph theory and various areas of physics, chemistry and engineering. The asymptotic behaviour of ridges and maxima of Gaussian kernel densities is also investigated as the kernel bandwidth approaches infinity. The results are applied to the nonlinear PCA and to finding significant maxima of such densities, which is a typical problem in visual object tracking.
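The central object here, a Gaussian kernel density estimate and iteration toward its critical points, can be sketched in a few lines. The following is not the thesis's trust-region Newton projector; it is the simplest related procedure, a 1-D mean-shift fixed-point iteration, which converges to a density maximum (the 0-dimensional special case of a ridge). All data values are toy assumptions.

```python
# Minimal sketch (assumed toy example, not the thesis implementation):
# Gaussian kernel density estimate and mean-shift iteration, which climbs
# to a density maximum -- the 0-dimensional special case of a ridge.
import math

def mean_shift_step(x, data, h):
    """One fixed-point step of gradient ascent on a Gaussian KDE with bandwidth h."""
    weights = [math.exp(-((x - d) ** 2) / (2 * h * h)) for d in data]
    return sum(w * d for w, d in zip(weights, data)) / sum(weights)

def find_mode(x, data, h, tol=1e-10):
    """Iterate mean-shift steps until the point stops moving."""
    while True:
        x_new = mean_shift_step(x, data, h)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new

data = [0.9, 1.0, 1.1]            # toy 1-D sample
mode = find_mode(0.0, data, h=0.5)  # converges to the single mode near 1.0
```

With a bandwidth much larger than the sample spread, the KDE is unimodal and the iteration converges to the symmetric mode at 1.0; ridge projection in higher dimensions replaces this ascent with a constrained Newton step in the subspace orthogonal to the ridge.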
Abstract:
This thesis is concerned with state and parameter estimation in state space models. The estimation of states and parameters is an important task when mathematical modeling is applied to many different application areas such as global positioning systems, target tracking, navigation, brain imaging, spread of infectious diseases, biological processes, telecommunications, audio signal processing, stochastic optimal control, machine learning, and physical systems. In Bayesian settings, the estimation of states or parameters amounts to computation of the posterior probability density function. Except for a very restricted number of models, it is impossible to compute this density function in closed form. Hence, we need approximation methods. A state estimation problem involves estimating the states (latent variables) that are not directly observed in the output of the system. In this thesis, we use the Kalman filter, extended Kalman filter, Gauss–Hermite filters, and particle filters to estimate the states based on available measurements. Among these filters, particle filters are numerical methods for approximating the filtering distributions of non-linear non-Gaussian state space models via Monte Carlo. The performance of a particle filter heavily depends on the chosen importance distribution. For instance, an inappropriate choice of the importance distribution can lead to the failure of convergence of the particle filter algorithm. In this thesis, we analyze the theoretical Lᵖ particle filter convergence with general importance distributions, where p ≥ 2 is an integer. A parameter estimation problem is concerned with inferring the model parameters from measurements. For high-dimensional complex models, estimation of parameters can be done by Markov chain Monte Carlo (MCMC) methods. In its operation, the MCMC method requires the unnormalized posterior distribution of the parameters and a proposal distribution.
In this thesis, we show how the posterior density function of the parameters of a state space model can be computed by filtering-based methods, where the states are integrated out. This type of computation is then applied to estimate parameters of stochastic differential equations. Furthermore, we compute the partial derivatives of the log-posterior density function and use the hybrid Monte Carlo and scaled conjugate gradient methods to infer the parameters of stochastic differential equations. The computational efficiency of MCMC methods depends heavily on the chosen proposal distribution. A commonly used proposal distribution is Gaussian. In this kind of proposal, the covariance matrix must be well tuned. To tune it, adaptive MCMC methods can be used. In this thesis, we propose a new way of updating the covariance matrix using the variational Bayesian adaptive Kalman filter algorithm.
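The bootstrap particle filter referred to above, in which the importance distribution is simply the state transition prior, can be sketched compactly. The model below is an assumed toy linear-Gaussian system (chosen so the behaviour is easy to check), not the thesis's code.

```python
# A minimal bootstrap particle filter sketch (assumed toy model, not the
# thesis code). Model: x_t = 0.9*x_{t-1} + N(0, 0.1^2), y_t = x_t + N(0, 0.1^2).
import math
import random

def particle_filter(ys, n=1000, seed=1):
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n)]  # diffuse prior
    estimates = []
    for y in ys:
        # Propagate through the importance distribution = transition prior.
        particles = [0.9 * p + rng.gauss(0.0, 0.1) for p in particles]
        # Weight each particle by the Gaussian likelihood of the observation.
        weights = [math.exp(-((y - p) ** 2) / (2 * 0.1 ** 2)) for p in particles]
        total = sum(weights)
        # Posterior mean estimate of the state at this step.
        estimates.append(sum(w * p for w, p in zip(weights, particles)) / total)
        # Multinomial resampling to counteract weight degeneracy.
        particles = rng.choices(particles, weights=weights, k=n)
    return estimates

print(particle_filter([0.5, 0.4, 0.6, 0.55]))
```

Because the bootstrap proposal ignores the current observation, the weights can degenerate quickly in poorly matched models, which is exactly why the choice of importance distribution matters for the convergence results discussed in the abstract.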
Abstract:
The hemochromatosis gene, HFE, is located on chromosome 6 in close proximity to the HLA-A locus. Most Caucasian patients with hereditary hemochromatosis (HH) are homozygous for HLA-A3 and for the C282Y mutation of the HFE gene, while a minority are compound heterozygotes for C282Y and H63D. The prevalence of these mutations in non-Caucasian patients with HH is lower than expected. The objective of the present study was to evaluate the frequencies of HLA-A antigens and the C282Y and H63D mutations of the HFE gene in Brazilian patients with HH and to compare clinical and laboratory profiles of C282Y-positive and -negative patients with HH. The frequencies of HLA-A and of the C282Y and H63D mutations were determined by PCR-based methods in 15 male patients (median age 44 (20-72) years) with HH. Eight patients (53%) were homozygous and one (7%) was heterozygous for the C282Y mutation. None had compound heterozygosity for the C282Y and H63D mutations. All but three C282Y homozygotes were positive for HLA-A3, and three other patients without C282Y were shown to be either heterozygous (N = 2) or homozygous (N = 1) for HLA-A3. Patients homozygous for the C282Y mutation had higher ferritin levels and lower age at onset, but the difference was not significant. The presence of C282Y homozygosity in roughly half of the Brazilian patients with HH, together with the findings of HLA-A homozygosity in C282Y-negative subjects, suggests that other mutations in the HFE gene or in other genes involved in iron homeostasis might also be linked to HH in Brazil.
Abstract:
The distribution of polymorphisms related to glutathione S-transferases (GST) has been described in different populations, mainly for white individuals. We evaluated the distribution of GST mu (GSTM1) and theta (GSTT1) genotypes in 594 individuals, by multiplex PCR-based methods, using amplification of exon 7 of the CYP1A1 gene as an internal control. In São Paulo, 233 whites, 87 mulattos, and 137 blacks, all healthy blood-donor volunteers, were tested. In Bahia, where black and mulatto populations are more numerous, 137 subjects were evaluated. The frequency of the GSTM1 null genotype was significantly higher among whites (55.4%) than among mulattos (41.4%; P = 0.03) and blacks (32.8%; P < 0.0001) from São Paulo, or Bahian subjects in general (35.7%; P = 0.0003). There was no statistically different distribution among the non-white groups. The distribution of the GSTT1 null genotype among groups did not differ significantly. The agreement between self-reported and interviewer classification of skin color in the Bahian group was low. The interviewer classification indicated a gradient of distribution of the GSTM1 null genotype from whites (55.6%) to light mulattos (40.4%), dark mulattos (32.0%) and blacks (28.6%). However, information about race or ethnicity should be considered with caution regarding the bias introduced by different data collection techniques, especially in countries where racial admixture is intense and ethnic definition boundaries are loose. Because homozygous deletions of GST genes might be associated with cancer risk, a better understanding of the distribution of chemical-metabolizing genes can contribute to risk assessment of humans exposed to environmental carcinogens.
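The significance comparison reported above (55.4% of 233 whites vs. 32.8% of 137 blacks, P < 0.0001) can be approximately reproduced from the published percentages. The counts below are rounded back from those percentages, so this is a sketch of a standard two-proportion z-test, not the paper's exact analysis.

```python
# Approximate check (counts rounded back from the reported percentages,
# not the paper's raw data) of the GSTM1 null frequency comparison:
# whites 55.4% of 233 (~129) vs. blacks 32.8% of 137 (~45) in Sao Paulo.
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z-test for the difference of two independent proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail
    return z, p_value

z, p = two_proportion_z(129, 233, 45, 137)
print(round(z, 2), p < 0.0001)  # consistent with the reported P < 0.0001
```

The resulting p-value falls below 0.0001, matching the direction and magnitude of the comparison reported in the abstract.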
Abstract:
The physicochemical and biological properties of honey are directly associated with its floral origin. Commonly used methods for identification of the botanical origin of honey involve palynological analysis, chromatographic methods, or direct observation of bee behavior. However, these methods can be insensitive and time-consuming. DNA-based methods have become popular due to their simplicity, speed, and reliability. The main objective of this research is to introduce a protocol for the extraction of DNA from honey and demonstrate that molecular analysis of the extracted DNA can be used for its botanical identification. The original CTAB-based protocol for the extraction of DNA from plants was modified and used for DNA extraction from honey. DNA extraction was carried out from different honey samples with similar results in each replication. The extracted DNA was amplified by PCR using plant-specific primers, confirming that the DNA extracted using the modified protocol is of plant origin, has good quality for analysis of PCR products, and can be used for botanical identification of honey.
Abstract:
Social insects are known for their ability to display swarm intelligence, where the cognitive capabilities of the collective surpass those of the individuals forming it by orders of magnitude. The rise of crowdsourcing in recent years has sparked speculation as to whether something similar might be taking place on crowdsourcing sites, where hundreds or thousands of people interact with each other. The phenomenon has been dubbed collective intelligence. This thesis focuses on exploring the role of collective intelligence in crowdsourcing innovations. The task is approached through three research questions: 1) what is collective intelligence; 2) how is collective intelligence manifested in websites involved in crowdsourcing innovation; and 3) how important is collective intelligence for the functioning of the crowdsourcing sites. After developing a theoretical framework for collective intelligence, a multiple case study is conducted using an ethnographic data collection approach for the most part. A variety of qualitative, quantitative and simulation modelling methods are used to analyse the complex phenomenon from several theoretical viewpoints or ‘lenses’. Two possible manifestations of collective intelligence are identified: discussion, typical of web forums; and the wisdom of crowds in evaluating crowd submissions to websites. However, neither of these appears to be specific to crowdsourcing or critical for the functioning of the sites. Collective intelligence appears to play only a minor role in the cases investigated here. In addition, this thesis shows that feedback loops, which are found in all the cases investigated, reduce the accuracy of the crowd’s evaluations when a count of votes is used for aggregation.
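The "wisdom of crowds" mechanism this abstract discusses, and the way feedback loops can undermine it, rests on independent errors cancelling when estimates are aggregated. The simulation below is purely illustrative (all numbers are assumptions, not data from the thesis): it shows the baseline effect with independent guesses, which is exactly the independence that feedback loops erode.

```python
# A small illustrative simulation (assumed numbers, not thesis data) of the
# wisdom-of-crowds effect: the aggregated estimate of many independent noisy
# guessers beats a typical individual, because independent errors cancel.
import random
import statistics

rng = random.Random(42)
truth = 100.0
# 500 independent guesses, each unbiased but noisy (sd = 20).
guesses = [truth + rng.gauss(0, 20) for _ in range(500)]

crowd_error = abs(statistics.mean(guesses) - truth)
typical_individual_error = statistics.mean(abs(g - truth) for g in guesses)
print(crowd_error < typical_individual_error)  # True
```

When feedback loops let earlier votes influence later ones, the errors are no longer independent, and the cancellation that makes the crowd mean accurate weakens, which is consistent with the thesis finding that feedback loops reduce the accuracy of vote-count aggregation.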
Abstract:
Increased awareness and evolving consumer habits have set more demanding standards for the quality and safety control of food products. The production of foodstuffs which fulfill these standards can be hampered by different low-molecular-weight contaminants. Such compounds include, for example, residues of antibiotics used in animals, or mycotoxins. The extremely small size of the compounds has hindered the development of analytical methods suitable for routine use, and the methods currently in use require expensive instrumentation and qualified personnel to operate them. There is a need for new, cost-efficient and simple assay concepts which can be used for field testing and are capable of processing large sample quantities rapidly. Immunoassays have been considered the gold standard for such rapid on-site screening methods. The introduction of directed antibody engineering and in vitro display technologies has facilitated the development of novel antibody-based methods for the detection of low-molecular-weight food contaminants. The primary aim of this study was to generate and engineer antibodies against low-molecular-weight compounds found in various foodstuffs. The three antigen groups selected as targets of antibody development cause food safety and quality defects in a wide range of products: 1) fluoroquinolones: a family of synthetic broad-spectrum antibacterial drugs used to treat a wide range of human and animal infections, 2) deoxynivalenol: a type B trichothecene mycotoxin, a widely recognized problem for crops and animal feeds globally, and 3) skatole, or 3-methylindole, one of the two compounds responsible for boar taint, found in the meat of monogastric animals. This study describes the generation and engineering of antibodies with versatile binding properties against low-molecular-weight food contaminants, and the subsequent development of immunoassays for the detection of the respective compounds.
Abstract:
The bachelor's thesis "Choosing a pricing strategy in the steel industry – Case Teräsyhtiö Oy" examines the choice of pricing strategy in a carbon-steel company manufacturing industrial goods, by mirroring the company's realized pricing against the theoretical principles of pricing. The objective of the study was to determine how well the case company's pricing follows these theoretical principles. A further aim was to examine how pricing is carried out in the case company and which factors influence this strategic decision. The theoretical part of the thesis consists of the theory of price and the pricing process, together with the actual framework of the thesis, formed by the traditional models of cost-based and market-based pricing; in the thesis, market-based models refer to demand- and competition-based models. The data were collected through thematic interviews with three people working in pricing at the case company. The study was carried out as qualitative research, using theory-driven content analysis. In the results, the division between the home market and regions outside it played an important role: these regions largely determined whether pricing had to be carried out by following or by setting the price. With respect to regional price levels in the industry, the steel companies' open price lists, which strongly guide pricing, were in a significant position. The most important goals of the pricing process were safeguarding profitability and practicing consistent pricing. Of the external factors guiding market-based pricing, the most important was compliance with the Finnish Competition Act (948/2011). The customer's role in pricing was also very significant. The study showed that the case company's pricing emphasizes market-based methods, while taking the effect of costs into account through margins. The study also showed that customer-perceived value is not taken into account in the groundwork of pricing to the extent that might be necessary. The conclusions of the study emphasize how taking customer value into account could enable higher profitability for the company.
Abstract:
Teaching programming as part of general education has recently attracted interest in Finland and elsewhere in the world. For example, according to the national core curriculum for basic education defined by the Finnish National Board of Education, to be adopted in 2016, programming skills will be taught in Finnish comprehensive schools starting from the first grade. Programming will not be added as a subject of its own; instead, it is to be taught in connection with other subjects, such as mathematics. This study discusses programming education as general education, reviews the most common challenges in learning programming, and examines the suitability of different teaching methods, particularly for teaching young pupils. For the study, a web-based learning application aimed at pupils aged about 9–12 was implemented, making effective use of a graphical programming language and visualization. Using the learning application, a comparative study was carried out with fourth-grade classes of a primary school, in which teaching with a graphical programming language was compared with another teaching method in which pupils were introduced to the basics of programming through physical games. In the comparative study, the pupils of two fourth-grade classes completed similar programming exercises, related to basic programming concepts, with both teaching methods. The aim of the study was to determine primary school pupils' current programming skills, how programming instruction is received by primary school pupils, whether different teaching methods matter for how the teaching is carried out, and whether differences appear in the learning outcomes of classes taught with the different methods. The pupils responded positively to both teaching methods and showed interest in studying programming.
In terms of content, too much material had been reserved for the lessons; however, for one of the most central topics, the concept of repetition, the class that practiced with active games showed considerably better command after the lesson than the class that practiced with the graphical programming language. Fourth-graders already had a good command of the sequential nature of program code before the programming exercises. Based on background research on the topic and interviews with the classes' teachers, the schools' readiness to teach programming in accordance with the curriculum reform was found to still be at a weak level.
Abstract:
Polyglutamine is a naturally occurring peptide found within several proteins in neuronal cells of the brain, and its aggregation has been implicated in several neurodegenerative diseases, including Huntington's disease. The resulting aggregates have been demonstrated to possess β-sheet structure, and aggregation has been shown to start with a single misfolded peptide. The current project sought to computationally examine the structural tendencies of three mutant polyglutamine peptides that were studied experimentally and found to aggregate with varying efficiencies. Low-energy structures were generated for each peptide by simulated annealing, and were analyzed quantitatively by various geometry- and energy-based methods. According to the results, the experimentally observed inhibition of aggregation appears to be due to localized conformational restraint placed on the peptide backbone by inserted prolines, which in turn confines the peptide to its native coil structure, discouraging the transition towards the β-sheet structure required for aggregation. Such knowledge could prove quite useful to the design of future treatments for Huntington's and other related diseases.
Abstract:
Evidence suggests that children with developmental coordination disorder (DCD) have lower levels of cardiorespiratory fitness (CRF) compared to children without the condition. However, these studies were restricted to field-based methods to predict VO2 peak in the determination of CRF. Such field tests have been criticised for their ability to provide a valid prediction of VO2 peak and for their vulnerability to psychological aspects in children with DCD, such as low perceived adequacy toward physical activity. Moreover, the contribution of physical activity to the variance in VO2 peak between the two groups is unknown. The purpose of our study was to determine the mediating role of physical activity and perceived adequacy towards physical activity on VO2 peak in children with significant motor impairments. This prospective case-control design involved 122 children (age 12-13 years): children with significant motor impairments (n=61) and healthy controls (n=61) matched on age, gender and school location. Participants had been previously assessed for motor proficiency and classified as probable DCD (p-DCD) or healthy control using the Movement ABC test. VO2 peak was measured by a progressive exercise test on a cycle ergometer. Perceived adequacy was measured using a 7-item subscale from the Children's Self-perception of Adequacy in and Predilection for Physical Activity scale. Physical activity was monitored for seven days with the Actical® accelerometer. Children with p-DCD had significantly lower VO2 peak (48.76±7.2 ml/ffm/min; p ≤ 0.05) compared to controls (53.12±8.2 ml/ffm/min), even after correcting for fat-free mass. Regression analysis demonstrated that perceived adequacy and physical activity were significant mediators in the relationship between p-DCD and VO2 peak. In conclusion, using a stringent laboratory assessment, the results of the current study verify the findings of earlier studies, adding low CRF to the list of health consequences associated with DCD.
It seems that when testing for CRF in this population, there is a need to consider the psychological barriers associated with their condition. Moreover, strategies to increase physical activity in children with DCD may result in improvement in their CRF.
Abstract:
This is a study exploring teenaged girls’ understanding and experiences of cyberbullying as a contemporary social phenomenon. Participants included 4 Grade 11 and 12 girls from a medium-sized independent school in southwestern Ontario, Canada. The girls participated in 9 extracurricular study sessions from January to April 2013. During the sessions, they engaged with Drama for Social Intervention (Clark, 2009; Conrad, 2004; Lepp, 2011) activities with the intended goal of producing a collective creation. Qualitative data were collected throughout the sessions using fieldnotes, participant journals, interviews, and participant artefacts. The findings are presented as an ethnodrama (Campbell & Conrad, 2006; Denzin, 2003; Saldaña, 1999) with each thematic statement forming a title of a scene in the script (Rogers, Frellick, & Babinski, 2002). The study found that girl identity online consists of many disconnected avatars. It also suggested that distancing (Eriksson, 2011) techniques, used to engender safety in Drama for Social Intervention, might have contributed to participant disengagement with the study’s content. Implications for further research included the utility of arts-based methods to promote participants’ feelings of growth and reflection, and a reevaluation of cyberbullying discourses to better reflect girls’ multiple avatar identities. Implications for teachers and administrators encompassed a need for preventative approaches to cyberbullying education, incorporating affective empathy-building (Ang & Goh, 2010) and addressing girls’ feelings of safety in perceived anonymity online.