887 results for Input and outputs


Relevance: 100.00%

Abstract:

The principal components of classical senile plaques (SP) in Alzheimer's disease (AD) appear to be A4/beta protein and paired helical filaments (PHF). A4 deposits may evolve into classical SP in brain regions vulnerable to the formation of PHF. We have investigated the distribution of A4 deposits using an immunostain and the neurofibrillary change using the Gallyas stain in various regions of the hippocampus. This region is particularly affected in AD and also has relatively restricted inputs and outputs. In 6 patients we found a significant preponderance of A4 deposits in the adjacent parahippocampal gyrus (PHG) compared with all regions of the hippocampus. However, plaque-like clusters of PHF (Gallyas plaques) were more abundant in the subiculum, while neurofibrillary tangles (NFT) were more abundant in the subiculum and region CA1 compared with the PHG and other hippocampal regions. Hence, A4 deposits appear to be concentrated in the region providing a major input into the hippocampus, while the neurofibrillary changes are characteristic of the major output areas (subiculum and CA1). Taken together, the data suggest that A4 formation and the neurofibrillary changes may occur in regions of the hippocampus that are connected anatomically.

Relevance: 100.00%

Abstract:

The increasing intensity of global competition has led organizations to utilize various types of performance measurement tools for improving the quality of their products and services. Data envelopment analysis (DEA) is a methodology for evaluating and measuring the relative efficiencies of a set of decision making units (DMUs) that use multiple inputs to produce multiple outputs. In conventional DEA with input and/or output ratios, all data assume the form of crisp numbers. However, the observed values of data in real-world problems are sometimes expressed as interval ratios. In this paper, we propose two new models: general and multiplicative non-parametric ratio models for DEA problems with interval data. The contributions of this paper are fourfold: (1) we consider input and output data expressed as interval ratios in DEA; (2) we address the gap in the DEA literature for problems not suitable or difficult to model with crisp values; (3) we propose two new DEA models for evaluating the relative efficiencies of DMUs with interval ratios; and (4) we present a case study involving 20 banks with three interval ratios, where the traditional indicators are mostly financial ratios, to demonstrate the applicability and efficacy of the proposed models. © 2011 Elsevier Inc.
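
To make the envelopment form concrete, here is a minimal sketch of the conventional input-oriented CCR DEA model that the paper generalizes, solved as a linear program. The function name and data values are hypothetical illustrations, not the proposed interval-ratio models.

```python
# A minimal sketch of the conventional input-oriented CCR DEA model that
# interval-ratio DEA generalizes. All data values are hypothetical.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Efficiency of DMU `o` given inputs X (m x n) and outputs Y (s x n)."""
    m, n = X.shape
    s, _ = Y.shape
    # Decision variables: z = [theta, lambda_1, ..., lambda_n]
    c = np.zeros(n + 1)
    c[0] = 1.0  # minimize theta
    # Input constraints: sum_j lambda_j * x_ij - theta * x_io <= 0
    A_in = np.hstack([-X[:, [o]], X])
    b_in = np.zeros(m)
    # Output constraints: -sum_j lambda_j * y_rj <= -y_ro
    A_out = np.hstack([np.zeros((s, 1)), -Y])
    b_out = -Y[:, o]
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([b_in, b_out]),
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun  # theta*; a value of 1 means the DMU is efficient

# Hypothetical example: 4 DMUs, 2 inputs, 1 output
X = np.array([[2.0, 4.0, 3.0, 5.0],
              [3.0, 1.0, 2.0, 4.0]])
Y = np.array([[1.0, 1.0, 1.0, 1.0]])
print([round(ccr_efficiency(X, Y, o), 3) for o in range(4)])
```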

Relevance: 100.00%

Abstract:

Data envelopment analysis (DEA) is a methodology for measuring the relative efficiencies of a set of decision making units (DMUs) that use multiple inputs to produce multiple outputs. Crisp input and output data are fundamentally indispensable in conventional DEA. However, the observed values of the input and output data in real-world problems are sometimes imprecise or vague. Many researchers have proposed various fuzzy methods for dealing with the imprecise and ambiguous data in DEA. In this study, we provide a taxonomy and review of the fuzzy DEA methods. We present a classification scheme with four primary categories, namely, the tolerance approach, the α-level based approach, the fuzzy ranking approach and the possibility approach. We discuss each classification scheme and group the fuzzy DEA papers published in the literature over the past 20 years. To the best of our knowledge, this paper appears to be the only review and complete source of references on fuzzy DEA. © 2011 Elsevier B.V. All rights reserved.
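
As a concrete illustration of the α-level based approach in this taxonomy: at each membership level α, a fuzzy datum collapses to an interval, turning the fuzzy DEA program into a family of interval programs. A minimal sketch assuming triangular fuzzy numbers (the values are hypothetical):

```python
# A minimal sketch of the alpha-cut idea behind the alpha-level approach:
# a triangular fuzzy number (l, m, u) reduces, at level alpha, to the
# interval [l + alpha*(m - l), u - alpha*(u - m)]. Numbers are hypothetical.
def alpha_cut(l, m, u, alpha):
    """Interval of a triangular fuzzy number at membership level alpha."""
    assert l <= m <= u and 0.0 <= alpha <= 1.0
    return (l + alpha * (m - l), u - alpha * (u - m))

# At alpha = 1 the interval collapses to the modal (crisp) value m;
# at alpha = 0 it spans the whole support [l, u].
for a in (0.0, 0.5, 1.0):
    print(a, alpha_cut(2.0, 3.0, 5.0, a))
```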

Relevance: 100.00%

Abstract:

Integer-valued data envelopment analysis (DEA) with alternative returns to scale technology has been introduced and developed recently by Kuosmanen and Kazemi Matin. The proportionality assumption of their "natural augmentability" axiom in constant and nondecreasing returns to scale technologies makes it possible to achieve feasible decision-making units (DMUs) of arbitrarily large size. In many real-world applications it is not possible to achieve such production plans, since some of the input and output variables are bounded above. In this paper, we extend the axiomatic foundation of integer-valued DEA models to include bounded output variables. Some model variants are obtained by introducing a new axiom of "boundedness" over the selected output variables. A mixed integer linear programming (MILP) formulation is also introduced for computing efficiency scores in the associated production set. © 2011 The Authors. International Transactions in Operational Research © 2011 International Federation of Operational Research Societies.
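
A simplified sketch of such an integer-valued DEA program written as a MILP. This follows the spirit of the Kuosmanen and Kazemi Matin envelopment form with an optional output cap to mimic the "boundedness" axiom, not the authors' exact formulation; the solver choice and data are hypothetical.

```python
# A simplified, hypothetical MILP sketch of input-oriented integer DEA:
# the reference point (xt, yt) is forced to be integer, and outputs may
# be capped to reflect bounded output variables. Requires `pulp`.
import pulp

def integer_dea(X, Y, o, y_cap=None):
    m, n, s = len(X), len(X[0]), len(Y)
    prob = pulp.LpProblem("integer_DEA", pulp.LpMinimize)
    theta = pulp.LpVariable("theta", lowBound=0)
    lam = [pulp.LpVariable(f"lam_{j}", lowBound=0) for j in range(n)]
    # integer-valued reference point (x_tilde, y_tilde)
    xt = [pulp.LpVariable(f"xt_{i}", lowBound=0, cat="Integer") for i in range(m)]
    yt = [pulp.LpVariable(f"yt_{r}", lowBound=0, cat="Integer") for r in range(s)]
    prob += theta  # objective: radial input contraction factor
    for i in range(m):
        prob += pulp.lpSum(lam[j] * X[i][j] for j in range(n)) <= xt[i]
        prob += xt[i] <= theta * X[i][o]
    for r in range(s):
        prob += pulp.lpSum(lam[j] * Y[r][j] for j in range(n)) >= yt[r]
        prob += yt[r] >= Y[r][o]
        if y_cap is not None:
            prob += yt[r] <= y_cap[r]  # bounded output variable
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return pulp.value(theta)

X = [[2, 4, 3, 5], [3, 1, 2, 4]]  # integer inputs (2 inputs x 4 DMUs)
Y = [[1, 2, 1, 2]]                # integer outputs (1 output x 4 DMUs)
print([round(integer_dea(X, Y, o), 3) for o in range(4)])
```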

Relevance: 100.00%

Abstract:

With business incubators deemed as a potent infrastructural element for entrepreneurship development, business incubation management practice and performance have received widespread attention. However, despite this surge of interest, scholars have questioned the extent to which business incubation delivers added value. Thus, there is a growing awareness among researchers, practitioners and policy makers of the need for more rigorous evaluation of the business incubation output performance. Aligned to this is an increasing demand for benchmarking business incubation input/process performance and highlighting best practice. This paper offers a business incubation assessment framework, which considers input/process and output performance domains with relevant indicators. This tool adds value on different levels. It has been developed in collaboration with practitioners and industry experts and therefore it would be relevant and useful to business incubation managers. Once a large enough database of completed questionnaires has been populated on an online platform managed by a coordinating mechanism, such as a business incubation membership association, business incubator managers can reflect on their practices by using this assessment framework to learn their relative position vis-à-vis their peers against each domain. This will enable them to align with best practice in this field. Beyond implications for business incubation management practice, this performance assessment framework would also be useful to researchers and policy makers concerned with business incubation management practice and impact. Future large-scale research could test for construct validity and reliability. Also, discriminant analysis could help link input and process indicators with output measures.

Relevance: 100.00%

Abstract:

We propose three research problems to explore the relations between trust and security in the setting of distributed computation. In the first problem, we study trust-based adversary detection in distributed consensus computation. The adversaries we consider behave arbitrarily, disobeying the consensus protocol. We propose a trust-based consensus algorithm with local and global trust evaluations. The algorithm can be abstracted as a two-layer structure, with the top layer running a trust-based consensus algorithm and the bottom layer, as a subroutine, executing a global trust update scheme. We utilize a set of pre-trusted nodes, headers, to propagate local trust opinions throughout the network. This two-layer framework is flexible in that it can easily be extended to incorporate more complicated decision rules and global trust schemes. The first problem assumes that normal nodes are homogeneous, i.e., it is guaranteed that a normal node always behaves as it is programmed. In the second and third problems, however, we assume that nodes are heterogeneous, i.e., given a task, the probability that a node generates a correct answer varies from node to node. The adversaries considered in these two problems are workers from the open crowd who either invest little effort in the tasks assigned to them or intentionally give wrong answers to questions. In the second part of the thesis, we consider a typical crowdsourcing task that aggregates input from multiple workers as a problem in information fusion. To cope with the issue of noisy and sometimes malicious input from workers, trust is used to model workers' expertise. In a multi-domain knowledge learning task, however, using scalar-valued trust to model a worker's performance is not sufficient to reflect the worker's trustworthiness in each of the domains. To address this issue, we propose a probabilistic model to jointly infer multi-dimensional trust of workers, multi-domain properties of questions, and true labels of questions. Our model is very flexible and extensible to incorporate metadata associated with questions. To show this, we further propose two extended models, one of which handles input tasks with real-valued features and the other of which handles tasks with text features by incorporating topic models. Our models can effectively recover trust vectors of workers, which can be very useful for future task assignment adaptive to workers' trust. These results can be applied to fusion of information from multiple data sources such as sensors, human input, machine learning results, or a hybrid of them. In the second subproblem, we address crowdsourcing with adversaries under logical constraints. We observe that questions are often not independent in real-life applications; instead, there are logical relations between them. Similarly, workers that provide answers are not independent of each other either: answers given by workers with similar attributes tend to be correlated. Therefore, we propose a novel unified graphical model consisting of two layers. The top layer encodes domain knowledge, which allows users to express logical relations using first-order logic rules, and the bottom layer encodes a traditional crowdsourcing graphical model. Our model can be seen as a generalized probabilistic soft logic framework that encodes both logical relations and probabilistic dependencies. To solve the collective inference problem efficiently, we have devised a scalable joint inference algorithm based on the alternating direction method of multipliers.
The third part of the thesis considers the problem of optimal assignment under budget constraints when workers are unreliable and sometimes malicious. In a real crowdsourcing market, each answer obtained from a worker incurs a cost. The cost is associated with both the level of trustworthiness of workers and the difficulty of tasks. Typically, access to expert-level (more trustworthy) workers is more expensive than to the average crowd, and completion of a challenging task is more costly than a click-away question. Here, we address the optimal assignment of heterogeneous tasks to workers of varying trust levels under budget constraints. Specifically, we design a trust-aware task allocation algorithm that takes as inputs the estimated trust of workers and a pre-set budget, and outputs the optimal assignment of tasks to workers. We derive a bound on the total error probability that relates naturally to the budget, the trustworthiness of crowds, and the costs of obtaining labels from crowds: a higher budget, more trustworthy crowds, and less costly jobs result in a lower theoretical bound. Our allocation scheme does not depend on the specific design of the trust evaluation component, so it can be combined with generic trust evaluation algorithms.
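
A minimal sketch of the first problem's top layer, assuming a simplified trust-weighted update rule; the cutoff, trust values and update are hypothetical stand-ins for the thesis's local/global trust scheme.

```python
# A minimal sketch of trust-weighted average consensus: each node updates
# its value as a trust-weighted average of its neighbors, ignoring
# neighbors whose trust has fallen below a cutoff. The threshold and the
# fixed trust scores here are hypothetical simplifications.
import numpy as np

def trust_consensus_step(x, W, trust, cutoff=0.3):
    """One consensus iteration. x: node values, W: adjacency (0/1),
    trust: global trust scores in [0, 1]."""
    n = len(x)
    x_new = np.empty(n)
    for i in range(n):
        nbrs = [j for j in range(n) if W[i, j] and trust[j] >= cutoff]
        weights = np.array([trust[j] for j in nbrs])
        vals = np.array([x[j] for j in nbrs])
        # include own value with full self-trust
        x_new[i] = (x[i] + weights @ vals) / (1.0 + weights.sum())
    return x_new

# Hypothetical run: node 3 is adversarial (reports a wild value) and has
# already been assigned low trust by the bottom-layer trust scheme.
x = np.array([1.0, 1.2, 0.9, 100.0])
W = np.ones((4, 4)) - np.eye(4)
trust = np.array([0.9, 0.9, 0.9, 0.1])
for _ in range(20):
    x = trust_consensus_step(x, W, trust)
print(x)  # normal nodes converge near the average of honest values
```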

Relevance: 100.00%

Abstract:

We investigate the Becker-Döring model of nucleation with three generalisations: an input of monomer, an input of inhibitor, and finally allowing the monomers to form two morphologies of cluster. We assume size-independent aggregation and fragmentation rates. Initially we consider the problem of constant monomer input and determine the steady-state solution approached in the large-time limit, and the manner in which it is approached. Secondly, in addition to a constant input of monomer we allow a constant input of inhibitor, which prevents clusters from growing any larger and removes them from the kinetics of the process; the inhibitor is consumed in the action of poisoning a cluster. We determine a critical ratio of poison to monomer input below which the cluster concentrations tend to a non-zero steady-state solution and the poison concentration tends to a finite value. Above the critical input ratio, the concentrations of all cluster sizes tend to zero and the poison concentration grows without limit. In both cases the solution in the large-time limit is determined. Finally we consider a model where monomers form two morphologies, but the inhibitor acts on only one morphology. Four cases are identified, depending on the relative poison to monomer input rates and the relative thermodynamic stability. In each case we determine the final cluster distribution and poison concentration. We find that poisoning the less stable cluster type can have a significant impact on the structure of the more stable cluster distribution: a counter-intuitive result. All results are shown to agree with numerical simulation.
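
A minimal numerical sketch of the baseline case (constant monomer input, size-independent rates), truncated at a finite maximum cluster size; the rate constants, input strength and truncation are hypothetical choices for illustration.

```python
# A minimal sketch of the Becker-Doring equations with constant monomer
# input q and size-independent aggregation/fragmentation rates (a, b),
# truncated at maximum cluster size N. Parameter values are hypothetical.
import numpy as np
from scipy.integrate import solve_ivp

a, b, q, N = 1.0, 1.0, 0.1, 50  # hypothetical rates, input, truncation

def becker_doring(t, c):
    # c[0] is the monomer concentration c_1; c[j] is c_{j+1}
    J = a * c[0] * c[:-1] - b * c[1:]  # fluxes J_1 .. J_{N-1}
    dc = np.empty_like(c)
    dc[0] = q - J[0] - J.sum()         # monomer: input minus consumption
    dc[1:-1] = J[:-1] - J[1:]          # interior cluster sizes
    dc[-1] = J[-1]                     # largest retained size
    return dc

c0 = np.zeros(N)
sol = solve_ivp(becker_doring, (0.0, 200.0), c0, method="LSODA")
print(sol.y[:5, -1])  # smallest cluster concentrations at t = 200
```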

Relevance: 100.00%

Abstract:

Natural language processing has achieved great success in a wide range of applications, producing both commercial language services and open-source language tools. However, most methods take a static or batch approach, assuming that the model has all the information it needs and makes a one-time prediction. In this dissertation, we study dynamic problems where the input comes in a sequence instead of all at once, and the output must be produced while the input is arriving. In these problems, predictions are often made based only on partial information. We see this dynamic setting in many real-time, interactive applications. These problems usually involve a trade-off between the amount of input received (cost) and the quality of the output prediction (accuracy). Therefore, the evaluation considers both objectives (e.g., plotting a Pareto curve). Our goal is to develop a formal understanding of sequential prediction and decision-making problems in natural language processing and to propose efficient solutions. Toward this end, we present meta-algorithms that take an existing batch model and produce a dynamic model to handle sequential inputs and outputs. We build our framework upon the theory of Markov Decision Processes (MDPs), which allows learning to trade off competing objectives in a principled way. The main machine learning techniques we use are from imitation learning and reinforcement learning, and we advance current techniques to tackle problems arising in our settings. We evaluate our algorithm on a variety of applications, including dependency parsing, machine translation, and question answering. We show that our approach achieves a better cost-accuracy trade-off than the batch approach and heuristic-based decision-making approaches. We first propose a general framework for cost-sensitive prediction, where different parts of the input come at different costs. We formulate a decision-making process that selects pieces of the input sequentially, and the selection is adaptive to each instance. Our approach is evaluated on both standard classification tasks and a structured prediction task (dependency parsing). We show that it achieves similar prediction quality to methods that use all input, while inducing a much smaller cost. Next, we extend the framework to problems where the input is revealed incrementally in a fixed order. We study two applications: simultaneous machine translation and quiz bowl (incremental text classification). We discuss challenges in this setting and show that adding domain knowledge eases the decision-making problem. A central theme throughout the chapters is an MDP formulation of a challenging problem with sequential input/output and trade-off decisions, accompanied by a learning algorithm that solves the MDP.
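
A minimal sketch of the cost-accuracy trade-off as a sequential decision: at each step the policy either waits for more input (paying a cost) or commits to a prediction. The confidence values and the simple threshold policy are hypothetical stand-ins for the learned MDP policies described above.

```python
# A minimal sketch of a WAIT/PREDICT decision loop: consume input until
# the model's confidence clears a threshold, then emit a prediction.
# The confidence model and threshold policy are hypothetical.
from dataclasses import dataclass

@dataclass
class Step:
    token: str
    confidence: float  # model confidence if we predicted now

def threshold_policy(stream, wait_cost=0.05, threshold=0.9):
    """Returns (steps_consumed, total_cost) for one input sequence."""
    cost = 0.0
    for t, step in enumerate(stream, start=1):
        if step.confidence >= threshold:
            return t, cost            # PREDICT
        cost += wait_cost             # WAIT: read one more input unit
    return len(stream), cost          # end of input: forced prediction

stream = [Step("the", 0.2), Step("answer", 0.55), Step("is", 0.6),
          Step("42", 0.95)]
print(threshold_policy(stream))  # -> (4, 0.15)
```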

Relevance: 100.00%

Abstract:

In most agroecosystems, nitrogen (N) is the most important nutrient limiting plant growth. One management strategy that affects N cycling and N use efficiency (NUE) is conservation agriculture (CA), an agricultural system based on a combination of minimum tillage, crop residue retention and crop rotation. Available results on the optimization of NUE in CA are inconsistent, and studies that cover all three components of CA are scarce. Presently, CA is promoted in the Yaqui Valley in Northern Mexico, the country's major wheat-producing area, in which fertilizer application rates for the cultivation of irrigated durum wheat (Triticum durum L.) at yields of 6 t ha-1 increased from 80 to 250 kg ha-1 between 1968 and 1995, demonstrating the high intensification potential of this region. Given major knowledge gaps on N availability in CA, this thesis summarizes the current knowledge of N management in CA and provides insights into the effects of tillage practice, residue management and crop rotation on wheat grain quality and N cycling. The major aims of the study were to identify N fertilizer application strategies that improve N use efficiency and reduce N immobilization in CA, with the ultimate goal of stabilizing cereal yields, maintaining grain quality, minimizing N losses into the environment and reducing farmers' input costs. Soil physical and chemical properties in CA were measured and compared with those in conventional systems and permanent beds with residue burning, focusing on their relationship to plant N uptake and N cycling in the soil and how they are affected by tillage and by N fertilizer timing, method and dose. For N fertilizer management, we analyzed how placement, timing and amount of N fertilizer influenced yield and quality parameters of durum and bread wheat in CA systems. Overall, grain quality parameters, in particular grain protein concentration, decreased under zero tillage and with increasing amounts of residues left on the field compared with conventional systems. The second part of the dissertation provides an overview of applied methodologies to measure NUE and its components. We evaluated the methodology of ion-exchange resin cartridges under irrigated, intensive agricultural cropping systems on Vertisols to measure nitrate leaching losses, which drain through channels into the Sea of Cortez, where they lead to algal blooms. A thorough analysis of N inputs and outputs was conducted to calculate N balances in three different tillage-straw systems. As fertilizer inputs are high, N balances were positive in all treatments, indicating the risk of N leaching or volatilization during or in subsequent cropping seasons and during heavy rainfall in summer. Contrary to common belief, we did not find negative effects of residue burning on soil nutrient status, yield or N uptake. A labeled fertilizer experiment with urea-15N was implemented in micro-plots to measure N fertilizer recovery and the effects of residual fertilizer N in the soil from summer maize on the following winter wheat crop. The obtained N fertilizer recovery rates for maize grain were, at an average of 11%, very low for all treatments.
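
A minimal sketch of the kind of field-scale N balance computed from inputs and outputs (balance equals N inputs minus N outputs); all quantities below are hypothetical, not the study's measurements.

```python
# A minimal sketch of a field-scale N balance used to flag surplus N at
# risk of leaching or volatilization. All values are hypothetical.
def n_balance(fertilizer_n, residue_n, grain_n_removed, straw_n_removed):
    """All terms in kg N per hectare; a positive balance means surplus N."""
    inputs = fertilizer_n + residue_n
    outputs = grain_n_removed + straw_n_removed
    return inputs - outputs

# Hypothetical season: 250 kg/ha fertilizer, 30 kg/ha from residues,
# 180 kg/ha exported in grain, 40 kg/ha exported in straw.
surplus = n_balance(250, 30, 180, 40)
print(f"N surplus: {surplus} kg/ha")  # 60 kg/ha at risk of loss
```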

Relevance: 90.00%

Abstract:

With the advent of Service Oriented Architecture, Web Services have gained tremendous popularity. Due to the availability of a large number of Web services, finding an appropriate Web service according to the requirements of the user is a challenge. This warrants the need to establish an effective and reliable process of Web service discovery. A considerable body of research has emerged to develop methods to improve the accuracy of Web service discovery to match the best service. The process of Web service discovery results in suggesting many individual services that partially fulfil the user's interest. Considering the semantic relationships of words used in describing the services, as well as the input and output parameters, can lead to accurate Web service discovery. Appropriate linking of individual matched services should then fully satisfy the requirements the user is looking for. This research proposes to integrate a semantic model and a data mining technique to enhance the accuracy of Web service discovery. A novel three-phase Web service discovery methodology has been proposed. The first phase performs match-making to find semantically similar Web services for a user query. In order to perform semantic analysis on the content present in the Web service description language document, the support-based latent semantic kernel is constructed using an innovative concept of binning and merging on a large quantity of text documents covering diverse domains of knowledge. The use of a generic latent semantic kernel constructed with a large number of terms helps to find the hidden meaning of the query terms which otherwise could not be found. Sometimes a single Web service is unable to fully satisfy the requirement of the user. In such cases, a composition of multiple inter-related Web services is presented to the user. The task of checking the possibility of linking multiple Web services is done in the second phase. Once the feasibility of linking Web services is checked, the objective is to provide the user with the best composition of Web services. In the link analysis phase, the Web services are modelled as nodes of a graph and an all-pairs shortest-path algorithm is applied to find the optimum path at the minimum cost for traversal. The third phase, system integration, integrates the results from the preceding two phases by using an original fusion algorithm in the fusion engine. Finally, the recommendation engine, which is an integral part of the system integration phase, makes the final recommendations, including individual and composite Web services, to the user. In order to evaluate the performance of the proposed method, extensive experimentation has been performed. Results of the proposed support-based semantic kernel method of Web service discovery are compared with the results of the standard keyword-based information-retrieval method and a clustering-based machine-learning method of Web service discovery. The proposed method outperforms both information-retrieval and machine-learning based methods. Experimental results and statistical analysis also show that the best Web service compositions are obtained by considering 10 to 15 Web services found in phase I for linking. Empirical results also ascertain that the fusion engine boosts the accuracy of Web service discovery by combining the inputs from both the semantic analysis (phase I) and the link analysis (phase II) in a systematic fashion.
Overall, the accuracy of Web service discovery with the proposed method shows a significant improvement over traditional discovery methods.
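
A minimal sketch of the link-analysis phase as described: services become graph nodes, an edge carries the cost of chaining one service into another, and an all-pairs shortest-path algorithm (Floyd-Warshall here) yields minimum-cost compositions. The edge costs are hypothetical.

```python
# A minimal sketch of all-pairs shortest paths over a service graph:
# cost[i][j] is the cost of feeding service i's outputs into service j's
# inputs (INF if they cannot be linked). Edge costs are hypothetical.
INF = float("inf")

def floyd_warshall(cost):
    """Returns the matrix of minimum composition costs between services."""
    n = len(cost)
    d = [row[:] for row in cost]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

# Hypothetical 4-service graph: 0 -> 1 -> 3 is cheaper than 0 -> 2 -> 3.
cost = [[0,   1,   4,   INF],
        [INF, 0,   INF, 2],
        [INF, INF, 0,   1],
        [INF, INF, INF, 0]]
print(floyd_warshall(cost)[0][3])  # minimum composition cost 0 -> 3: 3
```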

Relevance: 90.00%

Abstract:

Economics education research studies conducted in the UK, USA and Australia to investigate the effects of learning inputs on academic performance have been dominated by the input-output model (Shanahan and Meyer, 2001). In the Student Experience of Learning framework, however, the link between learning inputs and outputs is mediated by students' learning approaches, which in turn are influenced by their perceptions of the learning contexts (Evans, Kirby, & Fabrigar, 2003). Many learning inventories, such as Biggs' Study Process Questionnaire and Entwistle and Ramsden's Approaches to Study Inventory, have been designed to measure approaches to academic learning. However, there is a limitation to using generalised learning inventories in that they tend to aggregate different learning approaches utilised in different assessments. As a result, important relationships between learning approaches and learning outcomes that exist in specific assessment context(s) will be missed (Lizzio, Wilson, & Simons, 2002). This paper documents the construction of an assessment-specific instrument to measure learning approaches in economics. The post-dictive validity of the instrument was evaluated by examining the association of learning approaches with students' perceived assessment demand in different assessment contexts.

Relevance: 90.00%

Abstract:

We present the design and deployment results for PosNet - a large-scale, long-duration sensor network that gathers summary position and status information from mobile nodes. The mobile nodes have a fixed-sized memory buffer to which position data is added at a constant rate, and from which data is downloaded at a non-constant rate. We have developed a novel algorithm that performs online summarization of position data within the buffer, where the algorithm naturally accommodates data input and output rate mismatch, and also provides a delay-tolerant approach to data transport. The algorithm has been extensively tested in a large-scale long-duration cattle monitoring and control application.
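
A minimal sketch of online summarization in a fixed-size buffer under input/output rate mismatch: when the buffer overflows, drop the interior sample whose removal distorts the recorded track least. The merge criterion is a hypothetical stand-in for the paper's algorithm.

```python
# A minimal sketch of a fixed-capacity position buffer that summarizes
# online: on overflow, it drops the interior point with the smallest
# deviation from the line through its neighbours. The criterion is a
# hypothetical simplification of the paper's summarization algorithm.
def deviation(p, q, r):
    """Cross-product magnitude: error of dropping q between p and r."""
    (x1, y1), (x2, y2), (x3, y3) = p, q, r
    return abs((x3 - x1) * (y2 - y1) - (x2 - x1) * (y3 - y1))

class PositionBuffer:
    def __init__(self, capacity):
        self.capacity = capacity
        self.points = []

    def add(self, point):  # called at a constant input rate
        self.points.append(point)
        if len(self.points) > self.capacity:
            # drop the interior point that distorts the track the least
            i = min(range(1, len(self.points) - 1),
                    key=lambda i: deviation(self.points[i - 1],
                                            self.points[i],
                                            self.points[i + 1]))
            del self.points[i]

    def download(self):  # called at a non-constant download rate
        out, self.points = self.points, []
        return out

buf = PositionBuffer(capacity=4)
for p in [(0, 0), (1, 0.1), (2, 0), (3, 5), (4, 0)]:
    buf.add(p)
print(buf.download())  # keeps the informative corner at (3, 5)
```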

Relevance: 90.00%

Abstract:

The aim of the exercise presented in this paper was to develop a Simulink (Matlab) control-system model of a heavy vehicle suspension. The objective facilitated by this outcome was a working model of a heavy vehicle (HV) suspension that could be used for future research. A working computer model is easier and cheaper to re-configure than a HV axle group installed on a truck; it presents less risk should something go wrong and allows more scope for variation and sensitivity analysis before embarking on further "real-world" testing. Empirical data recorded as the input and output signals of a HV suspension were used to develop the parameters for computer simulation of a linear time-invariant system described by a second-order differential equation of the form m·ẍ(t) + c·ẋ(t) + k·x(t) = f(t) (i.e. a "2nd-order" system). Using the empirical data as an input to the computer model allowed validation of its output against the empirical data. The errors ranged from less than 1% to approximately 3% for any parameter when comparing like-for-like inputs and outputs. The model is presented along with the results of the validation. This model will be used in future research in the QUT/Main Roads project Heavy vehicle suspensions – testing and analysis, particularly for a theoretical model of a multi-axle HV suspension with varying values of dynamic load sharing. Allowance will need to be made for the errors noted when using the computer models in this future work.
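
A minimal sketch of simulating and validating such a second-order LTI model, here in Python with scipy rather than Simulink; the mass, damping and stiffness parameters, the input signal and the noisy "measured" reference are all hypothetical.

```python
# A minimal sketch of simulating a second-order LTI suspension model
# m*x'' + c*x' + k*x = f(t) and scoring it against a reference signal.
# Parameters and signals are hypothetical, not the paper's empirical data.
import numpy as np
from scipy import signal

m, c, k = 500.0, 3000.0, 80000.0       # hypothetical mass, damping, stiffness
system = signal.lti([1.0], [m, c, k])  # transfer function 1/(m s^2 + c s + k)

t = np.linspace(0.0, 2.0, 1000)
u = np.where(t > 0.1, 1000.0, 0.0)     # hypothetical step input force (N)
t_out, y, _ = signal.lsim(system, U=u, T=t)

# Validation in the spirit of the paper: percentage error of the model
# output against a "measured" reference (here the same signal plus noise).
measured = y + np.random.normal(scale=1e-4, size=y.shape)
err = 100.0 * np.max(np.abs(y - measured)) / np.max(np.abs(measured))
print(f"peak error: {err:.2f}%")
```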

Relevance: 90.00%

Abstract:

Intelligible and accurate risk-based decision-making requires a complex balance of information from different sources, appropriate statistical analysis of this information and consequent intelligent inference and decisions made on the basis of these analyses. Importantly, this requires an explicit acknowledgement of uncertainty in the inputs and outputs of the statistical model. The aim of this paper is to progress a discussion of these issues in the context of several motivating problems related to the wider scope of agricultural production. These problems include biosecurity surveillance design, pest incursion, environmental monitoring and import risk assessment. The information to be integrated includes observational and experimental data, remotely sensed data and expert information. We describe our efforts in addressing these problems using Bayesian models and Bayesian networks. These approaches provide a coherent and transparent framework for modelling complex systems, combining the different information sources, and allowing for uncertainty in inputs and outputs. While the theory underlying Bayesian modelling has a long and well established history, its application is only now becoming more possible for complex problems, due to increased availability of methodological and computational tools. Of course, there are still hurdles and constraints, which we also address through sharing our endeavours and experiences.
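
A minimal sketch of the Bayesian updating underlying these approaches, applied to a toy surveillance question; the prior, survey sensitivity and counts are hypothetical.

```python
# A minimal sketch of Bayesian updating for surveillance: combine a prior
# (e.g., expert opinion on pest presence) with negative survey results to
# get a posterior that carries uncertainty through to the decision.
# All numbers are hypothetical.
def posterior_presence(prior, sensitivity, n_negative_surveys):
    """P(pest present | n consecutive negative surveys)."""
    p_data_given_present = (1.0 - sensitivity) ** n_negative_surveys
    p_data_given_absent = 1.0
    num = p_data_given_present * prior
    den = num + p_data_given_absent * (1.0 - prior)
    return num / den

# Expert prior of 20% presence; each survey detects the pest with
# probability 0.6 if it is actually there.
for n in (0, 1, 3, 5):
    print(n, round(posterior_presence(0.2, 0.6, n), 4))
```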

Relevance: 90.00%

Abstract:

In late 2007, Gold Coast City Council libraries embarked on an online library project, designed to ramp up libraries’ online services to customers. As part of this project, the Young People’s team identified a need to connect with youth aged 12 to 16 in the online environment, in order to create a direct channel of communication with this market segment and encourage them to engage with the library. Blogging was identified as an appropriate means of communicating with both current and potential library customers from this age group. The Young People’s team consequently prepared a concept plan for a youth blog for launch in Children’s Book Week 2008 and are working towards development of management and administrative models and documentation and implementation of the blog itself. While many libraries have been quick to take up Web 2.0-style services, there has been little formal publication about the successes (or failures) of this type of project. Likewise, few libraries have published about the planning, management, and administration of such services. The youth blog currently in development at Gold Coast City Council libraries will be supported by a robust planning phase and will be rigorously evaluated as part of the project. This paper will report on the project (its aims, objectives and outputs), the planning process, and the evaluation activities and outcomes.