861 results for Multi Domain Information Model


Relevance:

40.00%

Publisher:

Abstract:

There is growing popularity in the use of composite indices and rankings for cross-organizational benchmarking. However, little attention has been paid to alternative methods and procedures for computing these indices and to how the choice of method may affect the resulting indices and rankings. This dissertation developed an approach for assessing composite indices and rankings based on the integration of a number of methods for aggregation, data transformation and attribute weighting involved in their computation. The integrated model is based on the simulation of composite indices using methods and procedures proposed in the areas of multi-criteria decision making (MCDM) and knowledge discovery in databases (KDD). The approach was automated through an IT artifact that was designed, developed and evaluated following the framework and guidelines of the design-science paradigm of information systems research. This artifact dynamically generates multiple versions of indices and rankings by considering different methodological scenarios according to user-specified parameters. The computerized implementation was done in Visual Basic for Excel 2007. Using different performance measures, the artifact produces a number of Excel outputs for the comparison and assessment of the indices and rankings. To evaluate the efficacy of the artifact and its underlying approach, a full empirical analysis was conducted using the World Bank's Doing Business database for 2010, which includes ten sub-indices (each corresponding to a different area of the business environment and regulation) for 183 countries. The results, obtained using 115 methodological scenarios for the assessment of this index and its ten sub-indices, indicated that the variability of the component indicators considered in each case influenced the sensitivity of the rankings to the methodological choices.
Overall, the results of our multi-method assessment were consistent with the World Bank rankings, except in cases where the indices involved cost indicators measured relative to per capita income, which yielded more sensitive results. Low-income countries exhibited more sensitivity in their rankings, and less agreement between the benchmark rankings and our multi-method rankings, than higher-income country groups.
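The multi-method simulation described above can be sketched in a few lines. This is an illustrative Python sketch, not the dissertation's VBA artifact: the data, weights, transformations and scenario set are invented, and only two transformation and two aggregation methods are shown.

```python
import numpy as np

def minmax(x):
    # Rescale each column (indicator) to [0, 1]
    return (x - x.min(0)) / (x.max(0) - x.min(0))

def zscore(x):
    # Standardize each column to mean 0, std 1
    return (x - x.mean(0)) / x.std(0)

def build_rankings(indicators, weights):
    """Return one ranking per (transformation, aggregation) scenario."""
    scenarios = {}
    for t_name, t in (("minmax", minmax), ("zscore", zscore)):
        x = t(indicators)
        # Additive aggregation: weighted arithmetic mean
        scenarios[(t_name, "additive")] = (x * weights).sum(1)
        # Geometric aggregation needs positive scores, so shift if necessary
        xp = x - x.min() + 1e-9 if x.min() <= 0 else x
        scenarios[(t_name, "geometric")] = np.exp((weights * np.log(xp)).sum(1))
    # Higher composite score -> better (lower) rank number
    return {k: (-v).argsort().argsort() + 1 for k, v in scenarios.items()}

rng = np.random.default_rng(0)
data = rng.random((5, 3))         # 5 countries, 3 sub-indices (synthetic)
w = np.array([0.5, 0.3, 0.2])     # attribute weights summing to 1
ranks = build_rankings(data, w)
for scenario, r in ranks.items():
    print(scenario, r)
```

Comparing the rank vectors across scenarios is exactly where sensitivity to methodological choices shows up: a country whose rank is stable under all scenarios is robust, one whose rank varies is sensitive.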

Relevance:

40.00%

Publisher:

Abstract:

In the finance literature, many economic theories and models have been proposed to explain and estimate the relationship between risk and return. Assuming risk averseness and rational behavior on the part of the investor, models are developed that are supposed to help form efficient portfolios which either maximize the expected rate of return for a given level of risk or minimize risk for a given rate of return. One of the most widely used models for forming these efficient portfolios is Sharpe's Capital Asset Pricing Model (CAPM). In the development of this model it is assumed that investors have homogeneous expectations about the future probability distribution of the rates of return; that is, every investor assumes the same values of the parameters of the probability distribution. Likewise, homogeneity of financial volatility is commonly assumed, where volatility is taken as investment risk and is usually measured by the variance of the rates of return. Typically, the square root of the variance is used to define financial volatility. Furthermore, it is often assumed that the data-generating process consists of independent and identically distributed random variables, which again implies that financial volatility is measured from homogeneous time series with stationary parameters. In this dissertation, we investigate the assumption of homogeneity of market agents and provide evidence of heterogeneity in market participants' information, objectives, and expectations about the parameters of the probability distribution of prices, as given by the differences in the empirical distributions corresponding to different time scales, which in this study are associated with different classes of investors. We also demonstrate that the statistical properties of the underlying data-generating processes, including the volatility in the rates of return, are quite heterogeneous.
In other words, we provide empirical evidence against the traditional views about homogeneity using non-parametric wavelet analysis on trading data. The results show heterogeneity of financial volatility at different time scales, and time scale is one of the most important aspects in which trading behavior differs. In fact, we conclude that heterogeneity, as posited by the Heterogeneous Markets Hypothesis, is the norm and not the exception.
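The scale dependence of volatility discussed above can be illustrated without wavelets by a simpler aggregation-based variance-scaling check (not the dissertation's wavelet method): for an i.i.d. return process, the variance of returns aggregated over windows of length 2^j grows linearly with the window length, so systematic deviations from that line on real trading data are one symptom of scale heterogeneity. The data below are synthetic.

```python
import numpy as np

def variance_by_scale(returns, max_level=5):
    """Variance of non-overlapping aggregated returns at dyadic scales 2^0..2^max_level."""
    out = {}
    for j in range(max_level + 1):
        step = 2 ** j
        n = len(returns) // step * step
        # Sum returns over non-overlapping windows of length 2^j
        agg = returns[:n].reshape(-1, step).sum(axis=1)
        out[step] = agg.var()
    return out

rng = np.random.default_rng(1)
iid = rng.normal(0, 0.01, 4096)   # homogeneous i.i.d. benchmark series
scales = variance_by_scale(iid)
for step, v in scales.items():
    # ratio ~ 1 for i.i.d. data; heterogeneous trading data typically deviates
    print(step, round(v / (step * scales[1]), 2))
```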

Relevance:

40.00%

Publisher:

Abstract:

Ensemble stream modeling and data cleaning are sensor-information-processing systems with different training and testing methods by which their goals are cross-validated. This research examines a mechanism that seeks to extract novel patterns by generating ensembles from data. The main goal of label-less stream processing is to process the sensed events so as to eliminate uncorrelated noise and choose the most likely model without overfitting, thus obtaining higher model confidence. Higher-quality streams can be realized by combining many short streams into an ensemble that has the desired quality. The framework for the investigation is an existing data mining tool. First, for feature extraction in a bush or natural forest-fire event, we take the burnt area (BA*), a sensed ground truth obtained from logs, as our target variable. Even though this is an obvious model choice, the results are disappointing, for two reasons: first, the histogram of fire activity is highly skewed; second, the measured sensor parameters are highly correlated. Since using non-descriptive features does not yield good results, we resort to temporal features. By doing so we carefully eliminate the averaging effects; the resulting histogram is more satisfactory, and conceptual knowledge is learned from the sensor streams. Second is the process of feature induction by cross-validating attributes with single or multi-target variables to minimize training error. We use the F-measure score, which combines precision and recall, to determine the false-alarm rate of fire events. The multi-target data-cleaning trees use the information purity of the target leaf nodes to learn higher-order features. A sensitive variance measure, such as an F-test, is performed at each node's split to select the best attribute. The ensemble stream model approach improved when complicated features were combined with a simpler tree classifier.
The ensemble framework for data cleaning, and the enhancements to quantify the quality of fitness (30% spatial, 10% temporal, and 90% mobility reduction) of sensor streams, led to the formation of quality streams for sensor-enabled applications. This further motivates the novelty of stream quality labeling and its importance in handling the vast amounts of real-time mobile streams generated today.
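For reference, the F-measure used above to score fire-event detection is the harmonic combination of precision and recall. A minimal sketch, with hypothetical detection counts (not the study's data):

```python
def f_measure(tp, fp, fn, beta=1.0):
    """F-beta score from true positives, false positives, false negatives."""
    precision = tp / (tp + fp)   # fraction of alarms that were real fires
    recall = tp / (tp + fn)      # fraction of real fires that were detected
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# e.g. 40 true detections, 10 false alarms, 20 missed events
print(round(f_measure(40, 10, 20), 3))
```

A high false-alarm rate drives precision down, which the F-measure penalizes even when recall is good.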

Relevance:

40.00%

Publisher:

Abstract:

The objective of this study was to develop a GIS-based multi-class index overlay model to determine areas susceptible to inland flooding during extreme precipitation events in Broward County, Florida. Data layers used in the method include Airborne Laser Terrain Mapper (ALTM) elevation data, excess precipitation depth determined through a Soil Conservation Service (SCS) Curve Number (CN) analysis, and the slope of the terrain. The method includes a calibration procedure that uses "weights and scores" criteria obtained from Hurricane Irene (1999) records, a reported 100-year precipitation event, Doppler radar data and documented flooding locations. Results are displayed in maps of eastern Broward County depicting types of flooding scenarios for a 100-year, 24-hour storm based on soil-saturation conditions. As expected, the multi-class index overlay analysis showed that an increased potential for inland flooding can be expected when a higher antecedent moisture condition is experienced. The proposed method shows potential as a predictive tool for flooding susceptibility based on a relatively simple approach.
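The "weights and scores" overlay has a simple arithmetic core: each data layer is reclassified to class scores, and the layers are combined as a weighted sum. The sketch below is illustrative only; the layer names, class scores and weights are invented, not the study's calibrated values.

```python
import numpy as np

def index_overlay(layers, weights):
    """Weighted multi-class index overlay.

    layers:  dict name -> 2-D array of reclassified scores (e.g. 1..5)
    weights: dict name -> relative importance of that layer
    Returns a susceptibility grid (weighted mean of class scores).
    """
    total = sum(weights.values())
    return sum(weights[k] * layers[k] for k in layers) / total

# Tiny 2x2 grids with hypothetical class scores
elev_score = np.array([[5, 4], [2, 1]])   # low elevation -> high score
rain_score = np.array([[3, 3], [4, 2]])   # excess precipitation depth class
slope_score = np.array([[4, 2], [3, 1]])  # flat terrain -> high score

susceptibility = index_overlay(
    {"elevation": elev_score, "rainfall": rain_score, "slope": slope_score},
    {"elevation": 0.5, "rainfall": 0.3, "slope": 0.2},
)
print(susceptibility)
```

The calibration step described in the abstract amounts to tuning these weights and class boundaries until the high-susceptibility cells match the documented flooding locations.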

Relevance:

40.00%

Publisher:

Abstract:

Multi-cloud applications are composed of services offered by multiple cloud platforms, where the user/developer has full knowledge of the use of such platforms. The use of multiple cloud platforms avoids the following problems: (i) vendor lock-in, i.e. the application's dependency on a particular cloud platform, which is harmful if platform services degrade or fail, or if the price of service usage increases; (ii) degradation or failure of the application due to fluctuations in the quality of service (QoS) provided by some cloud platform, or even due to the failure of a service. In a multi-cloud scenario it is possible to replace a failing service, or one with QoS problems, with an equivalent service from another cloud platform. For an application to adopt the multi-cloud perspective, it is necessary to create mechanisms that can select which cloud services/platforms should be used in accordance with the requirements determined by the programmer/user. In this context, the major challenges in developing such applications include: (i) choosing which underlying cloud services and platforms should be used, based on the user-defined requirements in terms of functionality and quality; (ii) continually monitoring the dynamic information related to cloud services (such as response time, availability, and price), in addition to the wide variety of services; and (iii) adapting the application if QoS violations affect the user-defined requirements. This PhD thesis proposes an approach for the dynamic adaptation of multi-cloud applications, to be applied when a service is unavailable or when the requirements set by the user/developer indicate that another available multi-cloud configuration meets them more efficiently. This work therefore proposes a strategy composed of two phases.
The first phase consists of modeling the application, exploiting the capacity to represent commonalities and variability proposed in the Software Product Lines (SPL) paradigm. This phase uses an extended feature model to specify the cloud service configuration to be used by the application (commonalities) and the different possible providers for each service (variability). Furthermore, the non-functional requirements associated with cloud services are specified as properties in this model, describing dynamic information about these services. The second phase consists of an autonomic process based on the MAPE-K control loop, which is responsible for optimally selecting a multi-cloud configuration that meets the established requirements and for performing the adaptation. The proposed adaptation strategy is independent of the programming technique used to perform the adaptation. In this work we implement the adaptation strategy using several programming techniques, namely aspect-oriented, context-oriented, and component- and service-oriented programming. Based on the proposed steps, we sought to assess: (i) whether the modeling process and the specification of non-functional requirements can ensure effective monitoring of user satisfaction; (ii) whether the optimal selection process offers significant gains over a sequential approach; and (iii) which techniques offer the best trade-off between development/modularity effort and performance.
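The selection step of the MAPE-K loop described above can be sketched as a constrained search over candidate providers per service. This is a hypothetical illustration, not the thesis's implementation: the providers, metrics, prices and thresholds are all invented.

```python
from itertools import product

# Candidate providers per service with monitored dynamic information
services = {
    "storage": [
        {"provider": "A", "price": 5.0, "availability": 0.999, "response_ms": 120},
        {"provider": "B", "price": 3.0, "availability": 0.990, "response_ms": 200},
    ],
    "compute": [
        {"provider": "A", "price": 9.0, "availability": 0.995, "response_ms": 80},
        {"provider": "C", "price": 7.0, "availability": 0.999, "response_ms": 90},
    ],
}

def select_configuration(services, min_availability=0.995, max_response_ms=150):
    """Pick the cheapest multi-cloud configuration meeting all requirements."""
    best, best_cost = None, float("inf")
    for combo in product(*services.values()):
        # Plan phase: keep only configurations satisfying every requirement
        if all(s["availability"] >= min_availability and
               s["response_ms"] <= max_response_ms for s in combo):
            cost = sum(s["price"] for s in combo)
            if cost < best_cost:
                best, best_cost = combo, cost
    return best, best_cost

config, cost = select_configuration(services)
print([s["provider"] for s in config], cost)
```

In a full MAPE-K loop this selection would be re-run whenever the Monitor phase detects a QoS violation, and the Execute phase would then rebind the application to the newly selected providers.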

Relevance:

40.00%

Publisher:

Abstract:

In this study, a multi-model ensemble was implemented and verified, following one of the research priorities of the Subseasonal to Seasonal Prediction Project (S2S). A linear regression was applied to a set of ensemble reforecasts over past dates produced by the monthly forecasting systems of CNR-ISAC and ECMWF-IFS. Each of these contains one control member and four perturbed members. The variables chosen for the analysis are geopotential height at 500 hPa, temperature at 850 hPa, and 2-metre temperature; the spatial grid has a 1° × 1° lat-lon resolution, and the winters from 1990 to 2010 were used. ERA-Interim reanalyses are used both to fit the regression and to validate the results, by means of non-probabilistic scores such as the root mean square error (RMSE) and the anomaly correlation. Subsequently, Model Output Statistics (MOS) and Direct Model Output (DMO) techniques are applied to the multi-model ensemble to obtain probabilistic forecasts of the weekly mean 2-metre temperature anomalies. The MOS methods used are logistic regression and non-homogeneous Gaussian regression, while the DMO methods are democratic voting and the Tukey plotting position. These techniques are also applied to the individual models to allow comparisons based on probabilistic scores such as the ranked probability skill score, the discrete ranked probability skill score, and the reliability diagram. Both types of scores show that the multi-model performs better than the individual models. Moreover, the highest values of the probabilistic scores are obtained using a logistic regression on the ensemble mean alone. By applying the regression to datasets of reduced size, we produced a learning curve showing that increasing the number of dates in the training phase would not yield further improvements.
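The two deterministic verification scores mentioned above, RMSE and anomaly correlation, are simple to state. The sketch below uses synthetic arrays (not CNR-ISAC/ECMWF data) and also illustrates why an ensemble mean tends to verify better than a single member: averaging damps the unpredictable part of each member.

```python
import numpy as np

def rmse(forecast, verification):
    """Root mean square error between forecast and verifying values."""
    return np.sqrt(np.mean((forecast - verification) ** 2))

def anomaly_correlation(f_anom, v_anom):
    """Centered anomaly correlation between forecast and verifying anomalies."""
    num = np.sum(f_anom * v_anom)
    den = np.sqrt(np.sum(f_anom ** 2) * np.sum(v_anom ** 2))
    return num / den

rng = np.random.default_rng(2)
truth = rng.normal(0, 1, 500)                    # verifying anomalies
members = truth + rng.normal(0, 0.8, (5, 500))   # 5 noisy ensemble members
ens_mean = members.mean(axis=0)

# The ensemble mean filters member noise, so its RMSE is lower
print(round(rmse(members[0], truth), 2), round(rmse(ens_mean, truth), 2))
```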

Relevance:

40.00%

Publisher:

Abstract:

Thesis digitized by the Direction des bibliothèques de l'Université de Montréal.

Relevance:

40.00%

Publisher:

Abstract:

The organisational decision-making environment is complex, and decision makers must deal with uncertainty and ambiguity on a continuous basis. Managing and handling decision problems and implementing a solution requires an understanding of the complexity of the decision domain, to the point where the problem and its complexity, as well as the requirements for supporting decision makers, can be described. Research in the Decision Support Systems domain has been extensive over the last thirty years, with an emphasis on the development of further technology and better applications on the one hand and, on the other, a social approach focusing on understanding what decision making is about and how developers and users should interact. This research project considers a combined approach that endeavours to understand the thinking behind managers' decision making, as well as their informational and decisional guidance and decision support requirements. It utilises a cognitive framework, developed in 1985 by Humphreys and Berkeley, that juxtaposes the mental processes and ideas of decision-problem definition and problem solution, developed in tandem through cognitive refinement of the problem based on the analysis and judgement of the decision maker. The framework separates what is essentially a continuous process into five distinct levels of abstraction of managers' thinking and suggests a structure for the underlying cognitive activities. Alter (2004) argues that decision support provides a richer basis than decision support systems, in both practice and research. The literature on decision support, especially in regard to modern high-profile systems including Business Intelligence and Business Analytics, can give the impression that all 'smart' organisations utilise decision support and data analytics capabilities for all of their key decision-making activities.
However, this empirical investigation indicates a very different reality.

Relevance:

40.00%

Publisher:

Abstract:

Thesis digitized by the Direction des bibliothèques de l'Université de Montréal.

Relevance:

40.00%

Publisher:

Abstract:

The business model of an organization is an important strategic tool for its success and should therefore be understood by both business professionals and information technology professionals. In this context, and considering the importance of information technology in contemporary business models, this article aims to verify the use of business model components in the information technology (IT) project management process in enterprises. To achieve this goal, this exploratory research investigated the use of the business model concept in IT project management through a survey of 327 professionals conducted from February to April 2012. It was observed that the business model concept, as well as its practices and building blocks, is not yet explored to its full potential, possibly because it is relatively new. One of the benefits of this conceptual tool is that it provides different areas with an understanding of the core business, giving enterprise IT professionals and the business area a deeper knowledge of the enterprise's essential activities.

Relevance:

40.00%

Publisher:

Abstract:

Transient simulations are widely used in studying past climate, as they provide better comparison with existing proxy data. However, multi-millennial transient simulations using coupled climate models are usually computationally very expensive; as a result, several acceleration techniques are implemented when numerical simulations are used to recreate past climate. In this study, we compare the results of transient simulations of the present and the last interglacial, with and without acceleration of the orbital forcing, using the comprehensive coupled climate model CCSM3 (Community Climate System Model 3). Our study shows that in low-latitude regions, the simulation of long-term variations in interglacial surface climate is not significantly affected by the acceleration technique (with an acceleration factor of 10); hence, large-scale model-data comparison of surface variables is not hampered. However, in high-latitude regions where the surface climate has a direct connection to the deep ocean, e.g. in the Southern Ocean or the Nordic Seas, acceleration-induced biases in the sea-surface temperature evolution may occur, with potential influence on the dynamics of the overlying atmosphere. The data provided here are from both accelerated and non-accelerated runs as decadal mean values.

Relevance:

40.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08

Relevance:

40.00%

Publisher:

Abstract:

Previous research has highlighted the importance of positive physical activity (PA) behaviours during childhood in promoting sustained active lifestyles throughout the lifespan (Telama et al. 2005; 2014). It is in this context that the role of schools and teachers in facilitating PA education is promoted. Research suggests that teachers play an important role in shaping children's attitudes towards PA (Figley 1985) and that schools may be an efficient vehicle for PA provision and promotion (McGinnis, Kanner and DeGraw, 1991; Wechsler, Deveraux, Davis and Collins, 2000). Yet despite consensus that schools represent an ideal setting from which to 'reach' young people (Department of Health and Human Services, UK, 2012), there remains conceptual (e.g. multi-component intervention) and methodological (e.g. duration, intensity, family involvement) ambiguity regarding the mechanisms of change claimed by PA intervention programmes. This may, in part, explain research findings suggesting that PA interventions have had limited impact on children's overall activity levels, and thereby limited impact on children's metabolic health (Metcalf, Henley & Wilkin, 2012). A marked criticism of the health promotion field has been its focus on behavioural change while failing to acknowledge the impact of context in influencing health outcomes (Golden & Earp, 2011). For years, the trans-theoretical model of behaviour change has been 'the dominant model for health behaviour change' (Armitage, 2009); this model focusses primarily on the individual and the psychology of the change process. Arguably, it is limited by the individual's decision-making ability and degree of self-efficacy in achieving sustained behavioural change, and it does not take account of external factors that may hinder the ability to realise change.
Like the trans-theoretical model, socio-ecological models place the individual at the focal point of change, but they also emphasise the importance of connecting multiple impacting variables, in particular the connections between the social environment, the physical environment and public policy in facilitating behavioural change (REF). In this research, a social-ecological framework was used to connect the ways a PA intervention programme had an impact (or not) on participants, and to make explicit the foundational features of the programme that facilitated positive change. In this study, we examined the evaluation of a multi-agency approach to a PA intervention programme which aimed to increase physical activity, and awareness of the importance of physical activity, among key stage 2 (age 7-12) pupils in three UK primary schools. The agencies involved were the local health authority, a community-based charitable organisation, a local health administrative agency, and the city school district. In examining the impact of the intervention, we adopted a process evaluation model in order to better understand the mechanisms and context that facilitated change. The aim of this evaluation was therefore to describe the provision, process and impact of the intervention by 1) assessing changes in physical activity levels, 2) assessing changes in the students' attitudes towards physical activity, 3) examining students' perceptions of the child-sized fitness equipment in school and their likelihood of using the equipment outside of school, and 4) exploring staff perceptions, specifically the challenges and benefits, of facilitating equipment-based exercise sessions in the school environment. Methodology, Methods, Research Instruments or Sources Used: Evaluation of the intervention was designed as a matched-control study and was undertaken over a seven-month period.
The school-based intervention involved 3 intervention schools (n = 436; 224 boys) and one control school (n = 123; 70 boys) in a low-socioeconomic, multicultural urban setting. The PA intervention was separated into two phases: a motivational DVD and 10 days of circuit-based exercise sessions (Phase 1), followed by a maintenance phase (Phase 2) that incorporated a PA reward programme and the use of specialist kids' gym equipment located at each school for a period of 4 wk. Outcome measures were taken at baseline (January) and endpoint (July; end of the academic school year) using reliable and valid self-report measures. The children's attitudes towards PA were assessed using the Children's Attitudes towards Physical Activity (CATPA) questionnaire. The Physical Activity Questionnaire for Children (PAQ-C), a 7-day recall questionnaire, was used to assess PA levels over a school week. A standardised test battery (Fitnessgram®) was used to assess cardiovascular fitness, body composition, muscular strength and endurance, and flexibility. After the 4 wk period, similar kids' equipment was available for general access at local community facilities. The control school did not receive any of the interventions. All physical fitness tests and PA questionnaires were administered and collected prior to the start of the intervention (January) and following the intervention period (July) by an independent evaluation team. Evaluation testing took place at the individual schools over 2-3 consecutive days (depending on the number of children to be tested at the school). Staff (n = 19) and student (n = 436) perceptions of the child-sized fitness equipment were assessed via questionnaires post-intervention. Students completed a questionnaire assessing enjoyment, usage, ease of use, and equipment access and usage in the community. A further questionnaire assessed staff perceptions of the delivery of the exercise sessions, classroom engagement and student perceptions.
Conclusions, Expected Outcomes or Findings: Findings showed that both the intervention (16.4%) and control groups increased their PAQ-C scores by post-intervention (p < 0.05), with the intervention (17.8%) and control (21.3%) boys showing the greatest increases in physical activity levels. At post-intervention, there was a 5.5% decline in the intervention girls' attitudes towards PA in the aesthetic subdomain (p = 0.009), whereas the control boys showed an increase in positive attitudes in the health domain (p = 0.003). No significant differences in attitudes towards physical activity were observed in any other domain for either group at post-intervention (p > 0.05). In the equipment questionnaire, 96% of the children stated that they enjoyed using the equipment and would like to use it again in the future; however, at post-intervention only 27% reported using the equipment outside of school in the previous 7 days. Students identified the ski walker (34%) and cycle (32%) as their favourite pieces of equipment, and single-joint exercises such as the leg extension and bicep/tricep machines (<3%) as their least favourite. Key themes from staff were that the equipment sessions were enjoyable and a novel activity, that children felt very grown-up, and that the activity was linked to a real fitness experience. Staff also expressed the need for more support to deliver the sessions and more time for each session. Findings from this study suggest that a more integrated approach among the various agencies is required, particularly more support to increase teachers' pedagogical content knowledge in age-appropriate physical activity instruction. Recommendations for successful future implementation include a sufficient time period for all students to access and engage with the equipment, increased access to and marketing of facilities to parents within the local community, and professional teacher-support strategies to facilitate the exercise sessions.

Relevance:

40.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08

Relevance:

40.00%

Publisher:

Abstract:

Through Law 12.715/2012, the Brazilian government instituted the guidelines of a program named Inovar-Auto. In this context, energy efficiency is a survival requirement for the Brazilian automotive industry from September 2016 onwards. As established by the law, energy efficiency will not be calculated per model only; it will be calculated over the whole universe of new vehicles registered. In this scenario, the composition of the vehicles sold in the market will be a key factor in each automaker's profits, and energy efficiency and its consequences should be taken into consideration in all their aspects. This raises the following question: what long-term efficiency curve allows an automaker to comply with the rules while balancing investment in technologies, increasing energy efficiency without hurting the competitiveness of its product lineup? Among the several variables to be considered, one can highlight the analysis of manufacturing costs, customer value perception and market share, which characterizes this problem as one of multi-criteria decision making. To tackle the energy efficiency problem imposed by the legislation, this paper proposes a multi-criteria decision-making framework. The proposed framework combines a Delphi group and the Analytic Hierarchy Process (AHP) to identify suitable alternatives for automakers to incorporate in the main Brazilian vehicle segments. A forecast model based on artificial neural networks was used to estimate vehicle sales demand in order to validate the expected results. The approach is demonstrated in a real case study using public vehicle sales data from Brazilian automakers and public energy efficiency data.
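The AHP step of such a framework derives criterion weights from a pairwise-comparison matrix via its principal eigenvector and checks judgment consistency. The sketch below is a generic illustration: the three criteria come from the abstract, but the pairwise judgments are invented, not the study's Delphi results.

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights and consistency ratio from an AHP pairwise matrix."""
    vals, vecs = np.linalg.eig(pairwise)
    k = np.argmax(vals.real)                 # principal eigenvalue (lambda_max)
    w = np.abs(vecs[:, k].real)
    w /= w.sum()                             # normalize to a priority vector
    n = pairwise.shape[0]
    ci = (vals.real[k] - n) / (n - 1)        # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]      # Saaty's random index
    return w, ci / ri                        # weights, consistency ratio (CR)

# Criteria: manufacturing cost, customer value perception, market share
# (judgments below are hypothetical Saaty-scale comparisons)
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
w, cr = ahp_weights(A)
print(np.round(w, 3), cr < 0.1)  # CR < 0.1 is conventionally acceptable
```

In the proposed framework, the Delphi group would supply (and iterate on) the pairwise judgments, and the resulting weights would rank the technology alternatives per vehicle segment.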