898 results for Discrete Regression and Qualitative Choice Models
                                
Abstract:
Common approaches to IP-traffic modelling have featured the use of stochastic models based on the Markov property, which can be classified into black-box and white-box models according to the modelling approach used. White-box models are simple to understand and transparent, and a physical meaning is attributed to each of the associated parameters. To exploit this key advantage, this thesis explores the use of simple classic continuous-time Markov models, based on a white-box approach, to model not only the network traffic statistics but also the source behaviour with respect to the network and application. The thesis is divided into two parts. The first part focuses on the use of simple Markov and semi-Markov traffic models, starting from the simplest two-state model and moving up to n-state models with Poisson and non-Poisson statistics. The thesis then introduces the convenient-to-use, mathematically derived Gaussian Markov models, which are used to model the measured network IP traffic statistics. As one of its most significant contributions, the thesis establishes the significance of second-order density statistics, revealing that, in contrast to the first-order density, they carry much more unique information on traffic sources and behaviour. The thesis then exploits Gaussian Markov models to model these unique features and finally shows how the use of simple classic Markov models, coupled with second-order density statistics, provides an excellent tool for capturing maximum traffic detail, which is the essence of good traffic modelling. The second part of the thesis studies the ON-OFF characteristics of VoIP traffic with reference to accurate measurements of the ON and OFF periods, made from a large multi-lingual database of over 100 hours' worth of VoIP call recordings. The impact of the language, prosodic structure and speech rate of the speaker on the statistics of the ON-OFF periods is analysed and relevant conclusions are presented. Finally, an ON-OFF VoIP source model with log-normal transitions is contributed as an ideal candidate for modelling VoIP traffic, and the results of this model are compared with those of previously published work.
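The ON-OFF behaviour described above lends itself to a small simulation. The sketch below generates packet emission times for a single two-state VoIP source whose ON and OFF periods are log-normally distributed; all parameter values are hypothetical illustrations, not the thesis's fitted values.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters (illustration only, not fitted from the thesis):
# log-normal ON/OFF durations in seconds, constant packet rate while ON.
ON_MU, ON_SIGMA = -0.8, 0.6      # log-scale mean / std of ON (talk-spurt) periods
OFF_MU, OFF_SIGMA = 0.2, 0.8     # log-scale mean / std of OFF (silence) periods
PKT_RATE = 50.0                  # packets per second while ON

def simulate_onoff_source(total_time: float) -> np.ndarray:
    """Simulate one ON-OFF VoIP source with log-normally distributed
    ON and OFF periods; return sorted packet emission times."""
    t, packets = 0.0, []
    while t < total_time:
        on = rng.lognormal(ON_MU, ON_SIGMA)          # talk-spurt length
        n_pkts = rng.poisson(PKT_RATE * on)          # packets in this spurt
        packets.extend(t + rng.uniform(0.0, on, n_pkts))
        t += on + rng.lognormal(OFF_MU, OFF_SIGMA)   # silence gap
    return np.sort(np.array(packets))

times = simulate_onoff_source(600.0)   # ten minutes of one source
print(f"{times.size} packets, mean rate {times.size / 600.0:.1f} pkt/s")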
                                
Abstract:
Despite abundant literature on human behaviour in the face of danger, much remains to be discovered. Some descriptive models of behaviour in the face of danger are reviewed in order to identify areas where documentation is lacking. It is argued that little is known about the recognition and assessment of danger, and yet these are important aspects of cognitive processes. Speculative arguments about hazard assessment are reviewed and tested against the results of previous studies. Once hypotheses are formulated, the reasons for retaining the repertory grid as the main research instrument are outlined, and the choice of data analysis techniques is described. Whilst all samples used repertory grids, the rating scales differed between samples; therefore, an analysis is performed of the way in which the rating scales were used in the various samples and of some reasons why the scales were used differently. Then, individual grids are examined and compared between respondents within each sample; consensus grids are also discussed. The major results from all samples are then contrasted and compared. It was hypothesized that hazard assessment would encompass three main dimensions, i.e. 'controllability', 'severity of consequences' and 'likelihood of occurrence', which would emerge in that order. The results suggest that these dimensions are but facets of two broader dimensions, labelled 'scope of human intervention' and 'dangerousness'. It seems that these two dimensions encompass a number of more specific dimensions, some of which can be further fragmented. Thus, hazard assessment appears to be a more complex process about which much remains to be discovered. Some of the ways in which further discovery might proceed are discussed.
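Broad dimensions are typically extracted from repertory-grid ratings with principal components analysis. The sketch below is a generic illustration on synthetic ratings only; the grid size, rating scale and interpretation of components are assumptions, not the study's data or analysis.

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical repertory grid: 12 hazards (elements) rated on 8 bipolar
# constructs using a 1-7 scale (synthetic data, for illustration only).
ratings = rng.integers(1, 8, size=(12, 8)).astype(float)

# Centre each construct, then extract principal components - one way to look
# for broad underlying dimensions such as 'scope of human intervention' and
# 'dangerousness'.
centred = ratings - ratings.mean(axis=0)
_, s, vt = np.linalg.svd(centred, full_matrices=False)
explained = s**2 / np.sum(s**2)

for i, var in enumerate(explained[:3], start=1):
    print(f"component {i}: {var:.1%} of variance")
    print("  construct loadings:", np.round(vt[i - 1], 2))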
                                
Abstract:
This paper presents a novel prosody model in the context of computer text-to-speech synthesis applications for tone languages. We have demonstrated its applicability using the Standard Yorùbá (SY) language. Our approach is motivated by the theory that abstract and realised forms of various prosody dimensions should be modelled within a modular and unified framework [Coleman, J.S., 1994. Polysyllabic words in the YorkTalk synthesis system. In: Keating, P.A. (Ed.), Phonological Structure and Phonetic Form: Papers in Laboratory Phonology III, Cambridge University Press, Cambridge, pp. 293–324]. We have implemented this framework using the Relational Tree (R-Tree) technique. R-Tree is a sophisticated data structure for representing a multi-dimensional waveform in the form of a tree. The underlying assumption of this research is that it is possible to develop a practical prosody model by using appropriate computational tools and techniques which combine acoustic data with an encoding of the phonological and phonetic knowledge provided by experts. To implement the intonation dimension, fuzzy-logic-based rules were developed using speech data from native speakers of Yorùbá. The Fuzzy Decision Tree (FDT) and the Classification and Regression Tree (CART) techniques were tested in modelling the duration dimension. For practical reasons, we selected the FDT for implementing the duration dimension of our prosody model. To establish the effectiveness of our prosody model, we also developed a Stem-ML prosody model for SY. We have performed both quantitative and qualitative evaluations of our implemented prosody models. The results suggest that, although the R-Tree model does not predict the numerical speech prosody data as accurately as the Stem-ML model, it produces synthetic speech prosody with better intelligibility and naturalness. The R-Tree model is particularly suitable for speech prosody modelling for languages with limited language resources and expertise, e.g. African languages. Furthermore, the R-Tree model is easy to implement, interpret and analyse.
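As a pointer to how the duration dimension can be modelled with a CART-style regression tree, the sketch below fits scikit-learn's DecisionTreeRegressor to synthetic syllable features; the feature set, values and target durations are invented for illustration and are not the paper's data (the fuzzy decision tree variant is not shown).

import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(2)

# Hypothetical training data: each syllable is described by tone class
# (0=low, 1=mid, 2=high), normalised position in the phrase, and a long-vowel
# flag; the target is syllable duration in milliseconds.
X = np.column_stack([
    rng.integers(0, 3, 500),        # tone class
    rng.uniform(0.0, 1.0, 500),     # normalised phrase position
    rng.integers(0, 2, 500),        # long-vowel flag
])
y = 120 + 25 * X[:, 0] + 60 * X[:, 1] + 40 * X[:, 2] + rng.normal(0, 10, 500)

# A CART-style regression tree for the duration dimension.
cart = DecisionTreeRegressor(max_depth=4).fit(X, y)
print("predicted duration (ms):", cart.predict([[2, 0.8, 1]]).round(1))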
                                
Abstract:
Bovine tuberculosis (bTB), caused by infection with Mycobacterium bovis, is causing considerable economic loss to farmers and Government in the United Kingdom as its incidence increases. Efforts to control bTB in the UK are hampered by infection in Eurasian badgers (Meles meles), which represent a wildlife reservoir and a source of recurrent M. bovis exposure for cattle. Vaccination of badgers with the human TB vaccine, M. bovis Bacille Calmette-Guérin (BCG), in oral bait represents a possible disease control tool and holds the best prospect for reaching badger populations over a wide geographical area. Using mouse and guinea pig models, we evaluated the immunogenicity and protective efficacy, respectively, of candidate badger oral vaccines based on formulation of BCG in a lipid matrix, alginate beads, or a novel microcapsular hybrid of both lipid and alginate. Two different oral doses of BCG were evaluated in each formulation for their protective efficacy in guinea pigs, while a single dose was evaluated in mice. In mice, significant immune responses (based on lymphocyte proliferation and expression of IFN-gamma) were only seen with the lipid matrix and the lipid-in-alginate microcapsular formulations, corresponding to the isolation of viable BCG from alimentary tract lymph nodes. In guinea pigs, only BCG formulated in lipid matrix conferred protection to the spleen and lungs following aerosol-route challenge with M. bovis. Protection was seen with delivery doses in the range 10^6-10^7 CFU, although this was more consistent in the spleen at the higher dose. No protection in terms of organ CFU was seen with BCG administered in alginate beads or in lipid-in-alginate microcapsules, although 10^7 CFU in the latter formulation conferred protection in terms of increasing body weight after challenge and a smaller lung-to-body-weight ratio at necropsy. These results highlight the potential of lipid-based, rather than alginate-based, vaccine formulations as suitable delivery vehicles for an oral BCG vaccine in badgers.
                                
Abstract:
Purpose: Our study explores the mediating role of discrete emotions in the relationships between employee perceptions of distributive and procedural injustice, regarding an annual salary raise, and counterproductive work behaviors (CWBs). Design/Methodology/Approach: Survey data were provided by 508 individuals from telecom and IT companies in Pakistan. Confirmatory factor analysis, structural equation modeling, and bootstrapping were used to test our hypothesized model. Findings: We found a good fit between the data and our tested model. As predicted, anger (and not sadness) was positively related to aggressive CWBs (abuse against others and production deviance) and fully mediated the relationship between perceived distributive injustice and these CWBs. Against predictions, however, neither sadness nor anger was significantly related to employee withdrawal. Implications: Our findings provide organizations with an insight into the emotional consequences of unfair HR policies, and the potential implications for CWBs. Such knowledge may help employers to develop training and counseling interventions that support the effective management of emotions at work. Our findings are particularly salient for national and multinational organizations in Pakistan. Originality/Value: This is one of the first studies to provide empirical support for the relationships between in/justice, discrete emotions and CWBs in a non-Western (Pakistani) context. Our study also provides new evidence for the differential effects of outward/inward emotions on aggressive/passive CWBs. © 2012 Springer Science+Business Media, LLC.
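The mediation logic described (injustice leading to anger, which in turn drives aggressive CWBs, tested with bootstrapping) can be illustrated with a simple indirect-effect bootstrap on synthetic composite scores. Every variable name and effect size below is invented for illustration; the study's actual model was estimated with CFA and SEM.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)

# Hypothetical survey-style data: perceived distributive injustice (x),
# anger (m), and abusive CWB (y), each as a composite score.
n = 508
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)
y = 0.4 * m + 0.1 * x + rng.normal(size=n)

def indirect_effect(x, m, y):
    """a*b indirect effect from two OLS regressions (simple mediation)."""
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]
    b = sm.OLS(y, sm.add_constant(np.column_stack([m, x]))).fit().params[1]
    return a * b

# Percentile bootstrap confidence interval for the indirect effect.
boot = np.empty(2000)
for i in range(boot.size):
    idx = rng.integers(0, n, n)
    boot[i] = indirect_effect(x[idx], m[idx], y[idx])
print("indirect effect:", round(indirect_effect(x, m, y), 3),
      "95% CI:", np.round(np.percentile(boot, [2.5, 97.5]), 3))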
                                
Abstract:
Modelling architectural information is particularly important because of the acknowledged crucial role of software architecture in raising the level of abstraction during development. In the MDE area, the level of abstraction of models has frequently been related to low-level design concepts. However, model-driven techniques can be further exploited to model software artefacts that take into account the architecture of the system and its changes according to variations of the environment. In this paper, we propose model-driven techniques and dynamic variability as concepts useful for modelling the dynamic fluctuation of the environment and its impact on the architecture. Using the mappings from the models to the implementation, generative techniques allow the (semi-)automatic generation of artefacts, making the process more efficient and promoting software reuse. The automatic generation of configurations and reconfigurations from models provides the basis for safer execution. The architectural perspective offered by the models shifts the focus away from implementation details to the whole view of the system and its runtime changes, promoting high-level analysis. © 2009 Springer Berlin Heidelberg.
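One way to picture the derivation of reconfigurations from a variability model is the toy sketch below; the environment states, component names and mapping are assumptions made purely for illustration and do not represent the paper's metamodel or tooling.

from dataclasses import dataclass

@dataclass(frozen=True)
class Configuration:
    active_components: frozenset

# Variability model: each environment condition maps to the component set
# that should be active when that condition holds (assumed example values).
VARIABILITY_MODEL = {
    "low_bandwidth": Configuration(frozenset({"Core", "Compressor"})),
    "normal":        Configuration(frozenset({"Core", "Streamer", "Cache"})),
}

def reconfiguration_plan(current: Configuration, env_state: str) -> dict:
    """Derive which components to start and stop for the new environment state."""
    target = VARIABILITY_MODEL[env_state]
    return {
        "start": sorted(target.active_components - current.active_components),
        "stop":  sorted(current.active_components - target.active_components),
    }

plan = reconfiguration_plan(VARIABILITY_MODEL["normal"], "low_bandwidth")
print(plan)   # {'start': ['Compressor'], 'stop': ['Cache', 'Streamer']}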
                                
Abstract:
Strategic sourcing has increased in importance in recent years, and now plays an important role in companies' planning. The current volatility in supply markets means companies face multiple challenges involving lock-in situations, supplier bankruptcies or supply security issues. In addition, their exposure can increase due to natural disasters, as witnessed recently in the form of bird flu, volcanic ash and tsunamis. Therefore, the primary focus of this study is risk management in the context of strategic sourcing. The study presents a literature review on sourcing covering the 15 years from 1998 to 2012 and considers 131 academic articles. The literature describes strategic sourcing as a strategic, holistic process for managing supplier relationships, with a long-term focus on adding value to the company and realising competitive advantage. Few studies have uncovered the real risk impact and the status of risk management in strategic sourcing, and evaluation across countries and industries has been limited, with the construction sector particularly under-researched. The methodology is founded on a qualitative study of twenty cases across Germany and the United Kingdom from the construction sector and electronics manufacturing industries. While considering risk management in the context of strategic sourcing, the thesis takes into account six dimensions that cover trends in strategic sourcing, theoretical and practical sourcing models, risk management, supply and demand management, critical success factors and strategic supplier evaluation. The study contributes in several ways. First, recent trends are traced and future needs identified across the research dimensions of countries, industries and companies. Second, it evaluates critical success factors in contemporary strategic sourcing. Third, it explores the application of theoretical and practical sourcing models in terms of effectiveness and sustainability. Fourth, based on the case study findings, a risk-oriented strategic sourcing framework and a model for strategic sourcing are developed. These are based on the validation of contemporary requirements and a critical evaluation of the existing situation. The framework draws on the empirical findings and leads to a structured process to manage risk in strategic sourcing. The risk-oriented framework considers areas such as trends, corporate and sourcing strategy, critical success factors, strategic supplier selection criteria, risk assessment, reporting and strategy alignment. The proposed model highlights the essential dimensions in strategic sourcing and leads to a new definition of strategic sourcing supported by this empirical study.
                                
Abstract:
Quantitative structure-activity relationship (QSAR) analysis is a cornerstone of modern informatics. Predictive computational models of peptide-major histocompatibility complex (MHC) binding affinity based on QSAR technology have now become important components of modern computational immunovaccinology. Historically, such approaches were built around semi-qualitative classification methods, but these are now giving way to quantitative regression methods. We review three methods: a 2D-QSAR additive partial least squares (PLS) method and a 3D-QSAR comparative molecular similarity index analysis (CoMSIA) method, which can identify the sequence dependence of peptide-binding specificity for various class I MHC alleles from the reported binding affinities (IC50) of peptide sets, and an iterative self-consistent (ISC) PLS-based additive method, which is a recently developed extension of the additive method for the affinity prediction of class II peptides. The QSAR methods presented here have established themselves as immunoinformatic techniques complementary to existing methodology, useful in the quantitative prediction of binding affinity: current methods for the in silico identification of T-cell epitopes (which form the basis of many vaccines, diagnostics and reagents) rely on the accurate computational prediction of peptide-MHC affinity. We have reviewed various human and mouse class I and class II allele models. Studied alleles comprise HLA-A*0101, HLA-A*0201, HLA-A*0202, HLA-A*0203, HLA-A*0206, HLA-A*0301, HLA-A*1101, HLA-A*3101, HLA-A*6801, HLA-A*6802, HLA-B*3501, H2-K(k), H2-K(b), H2-D(b), HLA-DRB1*0101, HLA-DRB1*0401, HLA-DRB1*0701, I-A(b), I-A(d), I-A(k), I-A(S), I-E(d) and I-E(k). In this chapter we provide a step-by-step guide to building these models and assessing their reliability; the resulting models represent an advance on existing methods. The peptides used in this study are available from the AntiJen database (http://www.jenner.ac.uk/AntiJen). The PLS method is available commercially in the SYBYL molecular modeling software package. The resulting models, which can be used for accurate T-cell epitope prediction, are freely available online at http://www.jenner.ac.uk/MHCPred.
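As a rough illustration of an additive-method style PLS regression for peptide binding affinity, the sketch below uses scikit-learn's PLSRegression on a position-wise amino-acid indicator encoding; the encoding details, peptide length and synthetic affinities are assumptions for illustration, not AntiJen data or the published models.

import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(4)
AA = "ACDEFGHIKLMNPQRSTVWY"

def encode(peptide: str) -> np.ndarray:
    """Additive-method style encoding: one binary indicator per amino acid
    per position of a 9-mer (9 x 20 = 180 features)."""
    x = np.zeros(9 * 20)
    for pos, aa in enumerate(peptide):
        x[pos * 20 + AA.index(aa)] = 1.0
    return x

# Hypothetical training set: random 9-mers with synthetic pIC50 values.
peptides = ["".join(rng.choice(list(AA), 9)) for _ in range(300)]
X = np.array([encode(p) for p in peptides])
y = rng.normal(6.0, 1.0, size=len(peptides))      # synthetic -log10(IC50)

pls = PLSRegression(n_components=5).fit(X, y)
print("predicted pIC50:", float(pls.predict(X[:1]).ravel()[0]))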
                                
Abstract:
Purpose: This paper aims to contribute to the understanding of the factors that influence small to medium-sized enterprise (SME) performance and particularly, growth. Design/methodology/approach: This paper utilises an original data set of 360 SMEs employing 5-249 people to run logit regression models of employment growth, turnover growth and profitability. The models include characteristics of the businesses, the owner-managers and their strategies. Findings: The results suggest that size and age of enterprise dominate performance and are more important than strategy and the entrepreneurial characteristics of the owner. Having a business plan was also found to be important. Research limitations/implications: The results contribute to the development of theoretical and knowledge bases, as well as offering results that will be of interest to research and policy communities. The results are limited to a single survey, using cross-sectional data. Practical implications: The findings have a bearing on business growth strategy for policy makers. The results suggest that policy measures that promote the take-up of business plans and are targeted at younger, larger-sized businesses may have the greatest impact in terms of helping to facilitate business growth. Originality/value: A novel feature of the models is the incorporation of entrepreneurial traits and whether there were any collaborative joint venture arrangements. © Emerald Group Publishing Limited.
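A minimal sketch of the kind of logit specification described, fitted on synthetic firm-level data; the variable names, coefficients and sample below are invented for illustration and do not reproduce the paper's data or estimates.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)

# Hypothetical firm-level data: size (employees), age (years) and whether a
# business plan exists, predicting the odds of employment growth.
n = 360
size = rng.integers(5, 250, n)
age = rng.integers(1, 40, n)
has_plan = rng.integers(0, 2, n)
logit_p = -1.0 + 0.004 * size - 0.03 * age + 0.5 * has_plan
grew = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(np.column_stack([size, age, has_plan]).astype(float))
model = sm.Logit(grew, X).fit(disp=False)
print(model.params.round(3))      # constant, size, age, business plan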
                                
Abstract:
A combination of the two-fluid and drift-flux models has been used to model the transport of fibrous debris. This debris is generated during loss-of-coolant accidents in the primary circuit of pressurized or boiling water nuclear reactors, as high-pressure steam or water jets can damage adjacent insulation materials, including mineral wool blankets. Fibre agglomerates released from the mineral wools may reach the containment sump strainers, where they can accumulate and compromise the long-term operation of the emergency core cooling system. Single-effect experiments of sedimentation in a quiescent rectangular column and sedimentation in a horizontal flow are used to verify and validate this particular application of the multiphase numerical models. The utilization of both modelling approaches allows a number of pseudo-continuous dispersed phases of spherical wetted agglomerates to be modelled simultaneously. Key effects on the transport of the fibre agglomerates are particle size, density and turbulent dispersion, as well as the relative viscosity of the fluid-fibre mixture.
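To give a feel for the drift-flux view of a settling dispersed phase, the sketch below computes a Stokes terminal velocity and a Richardson-Zaki style hindered-settling drift flux for a single agglomerate class; all property values and the exponent are assumptions for illustration, not parameters from the paper's CFD models.

import numpy as np

# Hypothetical property values for one class of wetted fibre agglomerates
# treated as a pseudo-continuous dispersed phase.
RHO_WATER = 998.0         # kg/m^3
RHO_AGGLOMERATE = 1100.0  # kg/m^3
MU_WATER = 1.0e-3         # Pa.s
D_PARTICLE = 2.0e-4       # m, equivalent sphere diameter
G = 9.81                  # m/s^2

def stokes_terminal_velocity() -> float:
    """Terminal settling velocity of a single agglomerate, assuming Stokes
    drag (valid only at low particle Reynolds number)."""
    return (RHO_AGGLOMERATE - RHO_WATER) * G * D_PARTICLE**2 / (18 * MU_WATER)

def drift_flux(alpha: np.ndarray, n: float = 4.65) -> np.ndarray:
    """Drift flux j = alpha * (1 - alpha)**n * u_t, with a Richardson-Zaki
    style hindered-settling exponent n."""
    return alpha * (1.0 - alpha) ** n * stokes_terminal_velocity()

alphas = np.array([0.01, 0.05, 0.10, 0.20])
print("terminal velocity (m/s):", round(stokes_terminal_velocity(), 5))
print("drift flux (m/s):", np.round(drift_flux(alphas), 6))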
                                
Abstract:
This article attempts to repair the neglect of the qualitative uses of some and to suggest an explanation which could cover the full range of usage with this determiner - both quantitative and qualitative - showing how a single underlying meaning, modulated by contextual and pragmatic factors, can give rise to the wide variety of messages expressed by some in actual usage. Both the treatment of some as an existential quantifier and the scalar model which views some as evoking a less-than-expected quantity on a pragmatic scale are shown to be incapable of handling the qualitative uses of this determiner. An original analysis of some and the interaction of its meaning with the defining features of the qualitative uses is proposed, extending the discussion as well to the role of focus and the adverbial modifier quite. The crucial semantic feature of some for the explanation of its capacity to express qualitative readings is argued to be non-identification of a referent assumed to be particular. Under the appropriate conditions, this notion can give rise to qualitative denigration (implying it is not even worth the bother to identify the referent) or qualitative appreciation (implying the referent to be so outstanding that it defies identification). The explanation put forward is also shown to cover some's use as an approximator, thereby enhancing its plausibility even further. © Cambridge University Press 2012.
                                
Abstract:
2000 Mathematics Subject Classification: 60J80
                                
Abstract:
Findings on the growth-regulating activities of the end-product of lipid peroxidation 4-hydroxy-2-nonenal (HNE), which acts as a “second messenger of free radicals”, overlapped with the development of antibodies specific for the aldehyde-protein adducts. These led to qualitative immunochemical determinations of the presence of HNE in various pathophysiological processes and to a change in how the aldehyde's bioactivities are viewed, from toxicity to cell signalling. Moreover, findings of HNE-protein adducts in various organs under physiological circumstances support the concept of “oxidative homeostasis”, which implies that oxidative stress and lipid peroxidation are not only pathological but also physiological processes. Reactive aldehydes, at least HNE, could play an important role in oxidative homeostasis, while complementary research approaches might reveal the relevance of aldehyde-protein adducts as major biomarkers of oxidative stress, lipid peroxidation and oxidative homeostasis. Aiming to join efforts in such research activities, researchers interacting through the International 4-Hydroxynonenal Club, acting within SFRR-International, and through networking projects of the European Cooperation in Science and Technology (COST) carried out validation of the methods for lipid peroxidation and further developed the genuine 4-HNE-His ELISA, establishing quantitative and qualitative methods for the detection of 4-HNE-His adducts as a valuable tool to study oxidative stress and lipid peroxidation in cell cultures, various organs and tissues, and eventually in human plasma and serum analyses [1]. Reference: 1. Weber D, Milkovic L. Measurement of HNE-protein adducts in human plasma and serum by ELISA - comparison of two primary antibodies. Redox Biol. 2013:226-233.
                                
Abstract:
Our article is concerned with the paradoxical phenomenon that, in the equilibrium solutions of the Neumann model in which consumption is represented explicitly, the prices of the necessity goods that determine the wage can in some cases be zero, so that the equilibrium value of the real wage is also zero. This phenomenon always occurs in decomposable economies in which alternative equilibrium solutions with different growth and profit rates exist. The phenomenon can be discussed in a much more transparent form in the simpler variant of the model built on Leontief technology, which we exploit. We show that solutions whose growth factor is below the maximal level are economically meaningless and therefore of no interest. In doing so we show, on the one hand, that Neumann's excellent intuition served him well when he insisted on the uniqueness of his model's solution and, on the other hand, that no assumption about the decomposability of the economy is needed for this. The topic examined is closely related to Ricardo's analysis of the determination of the general rate of profit, cast in modern form by Sraffa, and to the well-known wage-profit and accumulation-consumption trade-off frontiers of neoclassical growth theory, which indicates the theoretical and history-of-thought interest of the subject. / === / In the Marx-Neumann version of the Neumann model introduced by Morishima, the use of commodities is split between production and consumption, and wages are determined as the cost of necessary consumption. In such a version it may occur that the equilibrium prices of all goods necessary for consumption are zero, so that the equilibrium wage rate becomes zero too. In fact, such a paradoxical case will always arise when the economy is decomposable and the equilibrium is not unique in terms of growth and interest rate. It can be shown that a zero equilibrium wage rate will appear in all equilibrium solutions where the growth and interest rate are less than maximal. This is another proof of Neumann's genius and intuition, for he arrived at the uniqueness of equilibrium via an assumption that implied that the economy was indecomposable, a condition relaxed later by Kemeny, Morgenstern and Thompson. This situation also occurs in similar models based on Leontief technology, and such versions of the Marx-Neumann model make the roots of the problem more apparent. Analysis of them also yields an interesting corollary to Ricardo's corn rate of profit: the real cause of the awkwardness is bad specification of the model, in which luxury commodities are introduced without there being a final demand for them, so that their production becomes a waste of resources. Bad model specification shows up as a consumption coefficient incompatible with the given technology in the more general model with joint production and technological choice, for the paradoxical situation implies that the level of consumption could be raised and/or the intensity of labour diminished without lowering the equilibrium rate of growth and interest. This entails wasteful use of resources and indicates again that the equilibrium conditions are improperly specified. It is shown that the conditions for equilibrium can and should be redefined for the Marx-Neumann model without assuming an indecomposable economy, in a way that ensures the existence of an equilibrium that is unique in terms of the growth and interest rate and coupled with a positive value for the wage rate, so confirming Neumann's intuition. The proposed solution relates closely to the findings of Bromek in a paper correcting Morishima's generalization of the wage/profit and consumption/investment frontiers.
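For readers unfamiliar with the model, a minimal sketch of the standard von Neumann equilibrium conditions is given below in textbook form; this is background only and omits the explicit consumption/wage specification of the Marx-Neumann variant analysed above.

% Sketch of the standard von Neumann equilibrium conditions (textbook form).
% A: input matrix, B: output matrix, x: intensity vector, p: price row vector,
% \alpha: growth factor, \beta: interest factor.
\begin{align*}
  Bx &\ge \alpha Ax, & pB &\le \beta pA, \\
  p(B - \alpha A)x &= 0, & p(B - \beta A)x &= 0, \\
  x &\ge 0, \quad p \ge 0, & pBx &> 0.
\end{align*}
% Together these force \alpha = \beta in equilibrium; the paradox discussed
% above is that, once consumption is made explicit, solutions with \alpha
% below its maximal value can drive the prices of all wage goods, and hence
% the real wage, to zero.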
                                
Abstract:
Computers have dramatically changed the way we live, conduct business, and deliver education. They have infiltrated the Bahamian public school system to the extent that many educators now feel the need for a national plan. The development of such a plan is a challenging undertaking, especially in developing countries where physical, financial, and human resources are scarce. This study assessed the situation with regard to computers within the Bahamian public school system, and provided recommended guidelines to the Bahamian government based on the results of a survey, the body of knowledge about trends in computer usage in schools, and the country's needs. This was a descriptive study for which an extensive review of literature in the areas of computer hardware, software, teacher training, research, curriculum, support services and local context variables was undertaken. One objective of the study was to establish what should or could be done relative to the state of the art in educational computing. A survey was conducted involving 201 teachers and 51 school administrators from 60 randomly selected Bahamian public schools. A random stratified cluster sampling technique was used. This study used both quantitative and qualitative research methodologies. Quantitative methods were used to summarize the data about numbers and types of computers, categories of software available, peripheral equipment, and related topics through the use of forced-choice questions in a survey instrument. The results were displayed in tables and charts. Qualitative methods, data synthesis and content analysis, were used to analyze the non-numeric data obtained from open-ended questions on teachers' and school administrators' questionnaires, such as those regarding teachers' perceptions and attitudes about computers and their use in classrooms. Also, interpretative methodologies were used to analyze the qualitative results of several interviews conducted with senior public school system officials. Content analysis was used to gather data from the literature on topics pertaining to the study. Based on the literature review and the data gathered for this study, a number of recommendations are presented. These recommendations may be used by the government of the Commonwealth of The Bahamas to establish policies with regard to the use of computers within the public school system.
 
                    