924 results for Input and outputs
Abstract:
Data envelopment analysis (DEA) is a methodology for measuring the relative efficiencies of a set of decision-making units (DMUs) that use multiple inputs to produce multiple outputs. Crisp input and output data are fundamentally indispensable in conventional DEA. However, the observed values of the input and output data in real-world problems are sometimes imprecise or vague. Many researchers have proposed various fuzzy methods for dealing with imprecise and ambiguous data in DEA. In this study, we provide a taxonomy and review of the fuzzy DEA methods. We present a classification scheme with four primary categories: the tolerance approach, the α-level based approach, the fuzzy ranking approach, and the possibility approach. We discuss each category and group the fuzzy DEA papers published in the literature over the past 20 years. To the best of our knowledge, this paper is the only comprehensive review and source of references on fuzzy DEA.
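All four categories above generalize the same crisp linear program. As a point of reference, a minimal input-oriented CCR efficiency model (the usual starting point for fuzzy extensions) can be sketched as follows; the data are hypothetical and the third-party PuLP modeling library is assumed to be available:

```python
import pulp  # third-party LP/MILP modeling library (assumed installed)

# Hypothetical data: 4 DMUs, 2 inputs, 1 output.
X = [[4.0, 3.0], [7.0, 3.0], [8.0, 1.0], [4.0, 2.0]]  # X[j][i] = input i of DMU j
Y = [[1.0], [1.0], [1.0], [1.0]]                      # Y[j][r] = output r of DMU j
n, m, s = len(X), len(X[0]), len(Y[0])

def ccr_efficiency(o):
    """Input-oriented CCR envelopment model for DMU o:
    minimize theta such that a composite unit dominates DMU o."""
    prob = pulp.LpProblem("CCR", pulp.LpMinimize)
    theta = pulp.LpVariable("theta", lowBound=0.0)
    lam = [pulp.LpVariable(f"lam_{j}", lowBound=0.0) for j in range(n)]
    prob += theta  # objective: radial contraction of DMU o's inputs
    for i in range(m):  # composite inputs <= theta * inputs of DMU o
        prob += pulp.lpSum(lam[j] * X[j][i] for j in range(n)) <= theta * X[o][i]
    for r in range(s):  # composite outputs >= outputs of DMU o
        prob += pulp.lpSum(lam[j] * Y[j][r] for j in range(n)) >= Y[o][r]
    prob.solve(pulp.PULP_CBC_CMD(msg=0))
    return pulp.value(theta)

for o in range(n):
    print(f"DMU {o}: efficiency = {ccr_efficiency(o):.3f}")
```

In the α-level based approach, the crisp values in X and Y would be replaced by the interval bounds of the α-cuts of the fuzzy data, and the model re-solved at each α to obtain an efficiency interval rather than a single score.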
Abstract:
Integer-valued data envelopment analysis (DEA) with alternative returns-to-scale technologies was recently introduced and developed by Kuosmanen and Kazemi Matin. The proportionality assumption of their "natural augmentability" axiom in constant and nondecreasing returns-to-scale technologies makes it possible to achieve feasible decision-making units (DMUs) of arbitrarily large size. In many real-world applications such production plans cannot be achieved, since some of the input and output variables are bounded above. In this paper, we extend the axiomatic foundation of integer-valued DEA models to include bounded output variables. Several model variants are obtained by introducing a new axiom of "boundedness" over the selected output variables. A mixed-integer linear programming (MILP) formulation is also introduced for computing efficiency scores in the associated production set.
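A rough sketch of how such a MILP can look in practice is given below. This is not the authors' exact formulation; it simply grafts integer-valued target variables, with an upper bound on the output in the spirit of the boundedness axiom, onto the CCR sketch above (data and bound are hypothetical):

```python
import pulp  # assumed installed, as in the CCR sketch above

X = [[4, 3], [7, 3], [8, 1], [4, 2]]  # integer-valued inputs
Y = [[2], [3], [1], [2]]              # integer-valued outputs
U = [4]                               # hypothetical upper bound on each output
n, m, s = len(X), len(X[0]), len(Y[0])

def integer_dea_efficiency(o):
    """Radial input efficiency of DMU o with an integer, bounded
    target point (xt, yt) that the reference technology must dominate."""
    prob = pulp.LpProblem("IntegerDEA", pulp.LpMinimize)
    theta = pulp.LpVariable("theta", lowBound=0.0)
    lam = [pulp.LpVariable(f"lam_{j}", lowBound=0.0) for j in range(n)]
    xt = [pulp.LpVariable(f"xt_{i}", lowBound=0, cat="Integer") for i in range(m)]
    yt = [pulp.LpVariable(f"yt_{r}", lowBound=0, upBound=U[r], cat="Integer")
          for r in range(s)]
    prob += theta
    for i in range(m):
        prob += xt[i] <= theta * X[o][i]  # integer input target within the contraction
        prob += pulp.lpSum(lam[j] * X[j][i] for j in range(n)) <= xt[i]
    for r in range(s):
        prob += yt[r] >= Y[o][r]          # target must keep DMU o's output level
        prob += pulp.lpSum(lam[j] * Y[j][r] for j in range(n)) >= yt[r]
    prob.solve(pulp.PULP_CBC_CMD(msg=0))
    return pulp.value(theta)

print(integer_dea_efficiency(1))  # efficiency of the second DMU
```

The upper bound U caps the attainable output targets, ruling out the arbitrarily large production plans that natural augmentability would otherwise admit.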
Abstract:
With business incubators deemed a potent infrastructural element for entrepreneurship development, business incubation management practice and performance have received widespread attention. However, despite this surge of interest, scholars have questioned the extent to which business incubation delivers added value. Thus, there is a growing awareness among researchers, practitioners and policy makers of the need for more rigorous evaluation of business incubation output performance. Aligned to this is an increasing demand for benchmarking business incubation input/process performance and highlighting best practice. This paper offers a business incubation assessment framework, which considers input/process and output performance domains with relevant indicators. This tool adds value on several levels. It has been developed in collaboration with practitioners and industry experts, and it should therefore be relevant and useful to business incubation managers. Once a large enough database of completed questionnaires has been populated on an online platform managed by a coordinating mechanism, such as a business incubation membership association, business incubator managers can reflect on their practices by using this assessment framework to learn their relative position vis-à-vis their peers in each domain. This will enable them to align with best practice in the field. Beyond its implications for business incubation management practice, this performance assessment framework would also be useful to researchers and policy makers concerned with business incubation management practice and impact. Future large-scale research could test for construct validity and reliability. Discriminant analysis could also help link input and process indicators with output measures.
Abstract:
We propose three research problems to explore the relations between trust and security in the setting of distributed computation. In the first problem, we study trust-based adversary detection in distributed consensus computation. The adversaries we consider behave arbitrarily, disobeying the consensus protocol. We propose a trust-based consensus algorithm with local and global trust evaluations. The algorithm can be abstracted as a two-layer structure, with the top layer running a trust-based consensus algorithm and the bottom layer executing a global trust update scheme as a subroutine. We utilize a set of pre-trusted nodes, called headers, to propagate local trust opinions throughout the network. This two-layer framework is flexible in that it can easily be extended to accommodate more complicated decision rules and global trust schemes. The first problem assumes that normal nodes are homogeneous, i.e., a normal node is guaranteed to behave as it is programmed. In the second and third problems, however, we assume that nodes are heterogeneous, i.e., given a task, the probability that a node generates a correct answer varies from node to node. The adversaries considered in these two problems are workers from the open crowd who either invest little effort in the tasks assigned to them or intentionally give wrong answers. In the second part of the thesis, we consider a typical crowdsourcing task that aggregates input from multiple workers as a problem in information fusion. To cope with noisy and sometimes malicious input from workers, trust is used to model workers' expertise. In a multi-domain knowledge learning task, however, scalar-valued trust is not sufficient to reflect a worker's trustworthiness in each of the domains. To address this issue, we propose a probabilistic model to jointly infer multi-dimensional trust of workers, multi-domain properties of questions, and true labels of questions. Our model is flexible and extensible to incorporate metadata associated with questions. To show this, we further propose two extended models, one of which handles input tasks with real-valued features while the other handles tasks with text features by incorporating topic models. Our models can effectively recover trust vectors of workers, which can be very useful for task assignment adaptive to workers' trust in the future. These results can be applied to the fusion of information from multiple data sources such as sensors, human input, machine learning results, or a hybrid of them. In the second subproblem, we address crowdsourcing with adversaries under logical constraints. We observe that questions are often not independent in real-life applications; instead, there are logical relations between them. Similarly, workers that provide answers are not independent of each other either: answers given by workers with similar attributes tend to be correlated. We therefore propose a novel unified graphical model consisting of two layers. The top layer encodes domain knowledge, allowing users to express logical relations using first-order logic rules, and the bottom layer encodes a traditional crowdsourcing graphical model. Our model can be seen as a generalized probabilistic soft logic framework that encodes both logical relations and probabilistic dependencies. To solve the collective inference problem efficiently, we have devised a scalable joint inference algorithm based on the alternating direction method of multipliers.
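As a rough illustration of the scalar baseline that the multi-dimensional trust model generalizes, trust-weighted fusion of worker answers can be sketched as follows (worker names, trust values and labels are hypothetical):

```python
from collections import defaultdict

def fuse_labels(answers, trust):
    """answers: dict question -> list of (worker, label);
    trust: dict worker -> scalar weight in (0, 1].
    Returns the trust-weighted majority label for each question."""
    fused = {}
    for question, votes in answers.items():
        score = defaultdict(float)
        for worker, label in votes:
            score[label] += trust.get(worker, 0.5)  # neutral prior for unknown workers
        fused[question] = max(score, key=score.get)
    return fused

answers = {"q1": [("w1", "A"), ("w2", "B"), ("w3", "B")],
           "q2": [("w1", "A"), ("w3", "A")]}
trust = {"w1": 0.8, "w2": 0.4, "w3": 0.6}
print(fuse_labels(answers, trust))  # {'q1': 'B', 'q2': 'A'}
```

With multi-dimensional trust, the scalar weight would be replaced by the worker's inferred trust in the domain of the question, which is what the proposed probabilistic model estimates jointly with the true labels.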
The third part of the thesis considers the problem of optimal assignment under budget constraints when workers are unreliable and sometimes malicious. In a real crowdsourcing market, each answer obtained from a worker incurs a cost. The cost is associated with both the level of trustworthiness of workers and the difficulty of tasks: access to expert-level (more trustworthy) workers is typically more expensive than access to the average crowd, and completing a challenging task is more costly than a click-away question. We address the problem of optimally assigning heterogeneous tasks to workers of varying trust levels under budget constraints. Specifically, we design a trust-aware task allocation algorithm that takes as inputs the estimated trust of workers and a pre-set budget, and outputs the optimal assignment of tasks to workers. We derive a bound on the total error probability that relates naturally to the budget, the trustworthiness of the crowd, and the costs of obtaining labels: a higher budget, a more trustworthy crowd, and less costly jobs all yield a lower theoretical bound. Our allocation scheme does not depend on the specific design of the trust evaluation component, and can therefore be combined with generic trust evaluation algorithms.
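The thesis's allocation algorithm is not reproduced here, but a naive greedy baseline conveys the intuition of trading budget against worker trust; all task and worker parameters below are hypothetical:

```python
def greedy_assign(tasks, workers, budget):
    """tasks: list of (task_id, difficulty); workers: list of
    (worker_id, trust, cost_per_label). Give the hardest tasks to the
    most trustworthy workers the remaining budget can still afford.
    A naive baseline, not the thesis's optimal scheme."""
    assignment = []
    for task_id, _difficulty in sorted(tasks, key=lambda t: -t[1]):
        for worker_id, _trust, cost in sorted(workers, key=lambda w: -w[1]):
            if cost <= budget:
                assignment.append((task_id, worker_id))
                budget -= cost
                break  # task covered; move to the next one
    return assignment, budget

tasks = [("t1", 0.9), ("t2", 0.2)]
workers = [("expert", 0.95, 5.0), ("crowd", 0.60, 1.0)]
print(greedy_assign(tasks, workers, budget=6.0))
# ([('t1', 'expert'), ('t2', 'crowd')], 0.0)
```

Consistent with the error bound described above, a higher budget admits more (or more trustworthy) labels per task and thus a lower total error probability.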
Abstract:
We investigate the Becker-Döring model of nucleation with three generalisations: an input of monomer, an input of inhibitor and, finally, allowing the monomers to form two morphologies of cluster. We assume size-independent aggregation and fragmentation rates. Initially we consider the problem of constant monomer input and determine the steady-state solution approached in the large-time limit, and the manner in which it is approached. Secondly, in addition to a constant input of monomer we allow a constant input of inhibitor, which prevents clusters from growing any larger and removes them from the kinetics of the process; the inhibitor is consumed in the act of poisoning a cluster. We determine a critical ratio of poison to monomer input below which the cluster concentrations tend to a non-zero steady-state solution and the poison concentration tends to a finite value. Above the critical input ratio, the concentrations of all cluster sizes tend to zero and the poison concentration grows without limit. In both cases the solution in the large-time limit is determined. Finally we consider a model where monomers form two morphologies, but the inhibitor acts on only one morphology. Four cases are identified, depending on the relative poison to monomer input rates and the relative thermodynamic stability. In each case we determine the final cluster distribution and poison concentration. We find the counter-intuitive result that poisoning the less stable cluster type can have a significant impact on the structure of the more stable cluster distribution. All results are shown to agree with numerical simulation.
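For reference, the size-independent Becker-Döring system with a constant monomer source Q takes roughly the following form (the notation is assumed for illustration, not copied from the paper):

```latex
J_r = a\,c_1 c_r - b\,c_{r+1}, \qquad
\dot{c}_r = J_{r-1} - J_r \quad (r \ge 2), \qquad
\dot{c}_1 = Q - J_1 - \sum_{r \ge 1} J_r,
```

where c_r is the concentration of clusters of size r, J_r the net flux from size r to r+1, and a, b the size-independent aggregation and fragmentation rates. The inhibitor enters as an additional species that removes clusters from these fluxes upon poisoning them.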
Abstract:
Natural language processing has achieved great success in a wide range of applications, producing both commercial language services and open-source language tools. However, most methods take a static or batch approach, assuming that the model has all the information it needs and makes a one-time prediction. In this dissertation, we study dynamic problems where the input comes in a sequence instead of all at once, and the output must be produced while the input is arriving. In these problems, predictions are often made based only on partial information. We see this dynamic setting in many real-time, interactive applications. These problems usually involve a trade-off between the amount of input received (cost) and the quality of the output prediction (accuracy); the evaluation therefore considers both objectives (e.g., by plotting a Pareto curve). Our goal is to develop a formal understanding of sequential prediction and decision-making problems in natural language processing and to propose efficient solutions. Toward this end, we present meta-algorithms that take an existing batch model and produce a dynamic model to handle sequential inputs and outputs. We build our framework upon the theory of Markov Decision Processes (MDPs), which allows learning to trade off competing objectives in a principled way. The main machine learning techniques we use come from imitation learning and reinforcement learning, and we advance current techniques to tackle problems arising in our settings. We evaluate our algorithms on a variety of applications, including dependency parsing, machine translation, and question answering. We show that our approach achieves a better cost-accuracy trade-off than batch and heuristic-based decision-making approaches. We first propose a general framework for cost-sensitive prediction, where different parts of the input come at different costs. We formulate a decision-making process that selects pieces of the input sequentially, with the selection adaptive to each instance. Our approach is evaluated on both standard classification tasks and a structured prediction task (dependency parsing). We show that it achieves prediction quality similar to methods that use all of the input, while incurring a much smaller cost. Next, we extend the framework to problems where the input is revealed incrementally in a fixed order. We study two applications: simultaneous machine translation and quiz bowl (incremental text classification). We discuss challenges in this setting and show that adding domain knowledge eases the decision-making problem. A central theme throughout the chapters is an MDP formulation of a challenging problem with sequential input/output and trade-off decisions, accompanied by a learning algorithm that solves the MDP.
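The decision problems studied here share a common skeleton: at each step the model either waits for more input or commits to an output. A minimal threshold-based sketch of this loop is shown below; the dissertation replaces the fixed threshold with a policy learned via imitation or reinforcement learning, and all names and numbers here are hypothetical:

```python
def run_episode(stream, classifier, threshold=0.9, cost_per_token=0.01):
    """Consume input tokens one at a time and commit a prediction as
    soon as the batch model's confidence on the prefix clears the
    threshold. Returns (label, cost incurred)."""
    label, confidence = None, 0.0
    seen = []
    for token in stream:
        seen.append(token)
        label, confidence = classifier(seen)  # batch model applied to a prefix
        if confidence >= threshold:
            break  # commit early: trade a little accuracy for lower cost
    return label, cost_per_token * len(seen)

def toy_classifier(prefix):
    # Hypothetical stand-in for a trained model: confidence grows with
    # the amount of input seen.
    label = "positive" if "good" in prefix else "negative"
    return label, min(1.0, 0.5 + 0.1 * len(prefix))

print(run_episode("this was a good movie overall".split(), toy_classifier))
# ('positive', 0.04)
```

Framed as an MDP, the prefix is the state, wait/commit are the actions, and the reward encodes the cost-accuracy trade-off; the Pareto curve mentioned above is traced out by varying the threshold (or, in the learned setting, the reward weights).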
Abstract:
In most agroecosystems, nitrogen (N) is the most important nutrient limiting plant growth. One management strategy that affects N cycling and N use efficiency (NUE) is conservation agriculture (CA), an agricultural system based on a combination of minimum tillage, crop residue retention and crop rotation. Available results on the optimization of NUE in CA are inconsistent, and studies that cover all three components of CA are scarce. Presently, CA is promoted in the Yaqui Valley in northern Mexico, the country's major wheat-producing area, in which fertilizer application rates for the cultivation of irrigated durum wheat (Triticum durum L.) at 6 t ha-1 increased from 80 to 250 kg ha-1 between 1968 and 1995, demonstrating the high intensification potential of this region. Given major knowledge gaps on N availability in CA, this thesis summarizes the current knowledge of N management in CA and provides insights into the effects of tillage practice, residue management and crop rotation on wheat grain quality and N cycling. The major aims of the study were to identify N fertilizer application strategies that improve N use efficiency and reduce N immobilization in CA, with the ultimate goal of stabilizing cereal yields, maintaining grain quality, minimizing N losses to the environment and reducing farmers' input costs. Soil physical and chemical properties in CA were measured and compared with those in conventional systems and in permanent beds with residue burning, focusing on their relationship to plant N uptake and N cycling in the soil and on how they are affected by tillage and by N fertilizer timing, method and dose. For N fertilizer management, we analyzed how the placement, timing and amount of N fertilizer influenced yield and quality parameters of durum and bread wheat in CA systems. Overall, grain quality parameters, in particular grain protein concentration, decreased with zero tillage and with increasing amounts of residues left on the field compared with conventional systems. The second part of the dissertation provides an overview of applied methodologies for measuring NUE and its components. We evaluated the methodology of ion exchange resin cartridges under irrigated, intensive cropping systems on Vertisols to measure nitrate leaching losses, which ultimately end up, via drainage channels, in the Sea of Cortez, where they lead to algal blooms. A thorough analysis of N inputs and outputs was conducted to calculate N balances in three different tillage-straw systems. As fertilizer inputs are high, N balances were positive in all treatments, indicating a risk of N leaching or volatilization during or in subsequent cropping seasons and during heavy summer rainfall. Contrary to common belief, we did not find negative effects of residue burning on soil nutrient status, yield or N uptake. A labeled fertilizer experiment with 15N urea was implemented in micro-plots to measure N fertilizer recovery and the effects of residual fertilizer N in the soil from summer maize on the following winter wheat crop. The obtained N fertilizer recovery rates in maize grain were very low for all treatments, averaging 11%.
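As an illustration of the balance calculation referred to above, a partial N balance takes the general form below; the terms shown are typical examples, and the thesis's actual accounting may include further input and output terms:

```latex
N_{\text{balance}} \;=\; \underbrace{N_{\text{fertilizer}} + N_{\text{residues}} + N_{\text{irrigation}}}_{\text{inputs}}
\;-\; \underbrace{N_{\text{grain}} + N_{\text{straw removed}}}_{\text{outputs}}
```

A positive balance, as found in all treatments here, indicates surplus N that is vulnerable to leaching or volatilization.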
Abstract:
The search for alternatives to fossil fuels is boosting interest in biodiesel production. Among the crops used to produce biodiesel, palm trees stand out due to their high productivity and positive energy balance. This work assesses life cycle emissions and the energy balance of biodiesel production from palm oil in Brazil. The results are compared through a meta-analysis to previously published studies: Wood and Corley (1991) [Wood BJ, Corley RH. The energy balance of oil palm cultivation. In: PORIM intl. palm oil conference agriculture; 1991], Malaysia; Yusoff and Hansen (2005) [Yusoff S, Hansen SB. Feasibility study of performing a life cycle assessment on crude palm oil production in Malaysia. International Journal of Life Cycle Assessment 2007;12:50-8], Malaysia; Angarita et al. (2009) [Angarita EE, Lora EE, Costa RE, Torres EA. The energy balance in the palm oil-derived methyl ester (PME) life cycle for the cases in Brazil and Colombia. Renewable Energy 2009;34:2905-13], Colombia; Pleanjai and Gheewala (2009) [Pleanjai S, Gheewala SH. Full chain energy analysis of biodiesel production from palm oil in Thailand. Applied Energy 2009;86:S209-14], Thailand; and Yee et al. (2009) [Yee KF, Tan KT, Abdullah AZ, Lee KT. Life cycle assessment of palm biodiesel: revealing facts and benefits for sustainability. Applied Energy 2009;86:S189-96], Malaysia. In our study, data for the agricultural phase, transport, and energy content of the products and co-products were obtained from previous assessments done in Brazil. The energy intensities and greenhouse gas emission factors were obtained from the SimaPro 7.1.8 software and from other authors. These factors were applied to the inputs and outputs listed in the selected studies to render them comparable. The energy balance for our study was 1:5.37; in comparison, the range for the other studies is between 1:3.40 and 1:7.78. Life cycle emissions determined in our assessment resulted in 1437 kg CO2e/ha, while our analysis based on the information provided by the other authors resulted in 2406 kg CO2e/ha on average. The Angarita et al. (2009) study does not report emissions. When compared to diesel on an energy basis, avoided emissions due to the use of biodiesel account for 80 g CO2e/MJ. Thus, avoided life cycle emissions associated with the use of biodiesel yield a net reduction of greenhouse gas emissions. We also assessed the carbon balance between a palm tree plantation, including displaced emissions from diesel, and a natural ecosystem. Considering the carbon balance outcome plus life cycle emissions, the payback time for a tropical forest is 39 years. The result published by Gibbs et al. (2008) [Gibbs HK, Johnston M, Foley JA, Holloway T, Monfreda C, Ramankutty N, et al. Carbon payback times for crop-based biofuel expansion in the tropics: the effects of changing yield and technology. Environmental Research Letters 2008;3:10], which ignores life cycle emissions, determined a payback range for biodiesel production between 30 and 120 years.
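The payback time reported above follows the usual carbon-debt logic, which can be written schematically as follows (the symbols are illustrative, not taken from the paper):

```latex
t_{\text{payback}} \;=\; \frac{C_{\text{debt}}}{E_{\text{avoided}} - E_{\text{LC}}}
```

where C_debt is the one-time carbon stock loss from converting the natural ecosystem to plantation (kg CO2e/ha), E_avoided the annual emissions displaced by substituting biodiesel for diesel, and E_LC the annual life cycle emissions of the biodiesel chain. Including E_LC in the denominator, as done here, lengthens the payback time relative to estimates that ignore it.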
Abstract:
In the MPC literature, stability is usually assured under the assumption that the state is measured. Since the closed-loop system may be nonlinear because of the constraints, it is not possible to apply the separation principle to prove global stability in the output feedback case. It is well known that a nonlinear closed-loop system in which the state is estimated via an exponentially converging observer combined with a state feedback controller can be unstable even when the controller itself is stable. One alternative that avoids the state estimation problem is to adopt a non-minimal state-space model, in which the states are represented by measured past inputs and outputs [P.C. Young, M.A. Behzadi, C.L. Wang, A. Chotai, Direct digital and adaptive control by input-output, state variable feedback pole assignment, International Journal of Control 46 (1987) 1867-1881; C. Wang, P.C. Young, Direct digital control by input-output, state variable feedback: theoretical background, International Journal of Control 47 (1988) 97-109]. In this case, no observer is needed since the state variables can be directly measured. However, an important disadvantage of this approach is that the realigned model is not of minimal order, which makes the infinite-horizon approach to obtaining nominal stability difficult to apply. Here, we propose a method to properly formulate an infinite-horizon MPC based on the output-realigned model, which avoids the use of an observer and guarantees closed-loop stability. Simulation results show that, besides providing closed-loop stability for systems with integrating and stable modes, the proposed controller may perform better than MPC controllers that use an observer to estimate the current states.
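To make the realigned-model idea concrete, the sketch below builds a non-minimal state from measured past inputs and outputs for a hypothetical second-order ARX model and checks it against the difference equation; this illustrates the general construction from the cited references, not the proposed MPC formulation itself:

```python
import numpy as np

# Hypothetical ARX model: y[k+1] = a1*y[k] + a2*y[k-1] + b1*u[k] + b2*u[k-1]
a1, a2, b1, b2 = 1.2, -0.4, 0.5, 0.3

# Realigned (non-minimal) state uses measured signals only, so no
# observer is needed: x[k] = [y[k], y[k-1], u[k-1]]^T
A = np.array([[a1, a2, b2],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
B = np.array([[b1], [0.0], [1.0]])
C = np.array([[1.0, 0.0, 0.0]])

# Sanity check: the state-space model reproduces the ARX recursion.
rng = np.random.default_rng(0)
u = rng.standard_normal(50)
x = np.zeros((3, 1))
yk, yk1, uk1 = 0.0, 0.0, 0.0
y_ss, y_arx = [], []
for k in range(50):
    y_ss.append((C @ x).item())
    y_arx.append(yk)
    x = A @ x + B * u[k]
    yk, yk1, uk1 = a1 * yk + a2 * yk1 + b1 * u[k] + b2 * uk1, yk, u[k]
print(np.allclose(y_ss, y_arx))  # True
```

The price of this convenience is the non-minimal order visible in A (a zero row carrying the delayed input), which is what complicates the infinite-horizon stability argument the paper addresses.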
Abstract:
A novel setup for imaging and interferometry through reflection holography with Bi12TiO20 (BTO) sillenite photorefractive crystals is proposed. A variation of the lensless Denisiuk arrangement was developed, resulting in a compact, robust and simple interferometer. A red He-Ne laser was used as the light source, and the holographic recording occurred by diffusion with the grating vector parallel to the crystal [0 0 1] axis. In order to enhance the holographic image quality and reduce noise, a polarizing beam splitter (PBS) was positioned at the BTO input and the crystal was tilted around the [0 0 1] axis. This enabled the orthogonally polarized transmitted and diffracted beams to be separated by the PBS, providing the holographic image only. The possibility of performing deformation and strain analysis as well as vibration measurement of small objects was demonstrated.
Abstract:
The effect of the difficulty of discriminating between task-relevant and task-irrelevant stimuli on the relationship between skin conductance orienting and secondary-task reaction time (RT) was examined. Participants (N = 72) counted the number of longer-than-usual presentations of one shape (task-relevant) and ignored presentations of another shape (task-irrelevant). The difficulty of discriminating between the two shapes varied across three groups (low, medium, and high difficulty). Simultaneously with the primary counting task, participants performed a secondary RT task to acoustic probes presented 50, 150, and 2000 ms after shape onset. In the low-difficulty group, skin conductance orienting was larger, and secondary RT at the 2000 ms probe position slower, during task-relevant shapes than during task-irrelevant shapes. This difference declined as discrimination difficulty increased, such that there was no difference in the high-difficulty group. Secondary RT was slower during task-irrelevant shapes than during task-relevant shapes only in the medium-difficulty group, and only at the 150 ms probe position in the first half of the experiment. The close relationship between autonomic orienting and secondary RT at the 2000 ms probe position suggests that orienting reflects the resource allocation that results from the number of matching features between a stimulus input and a mental representation primed as significant.
Abstract:
Low-grade inflammation adversely influences metabolism and cardiovascular prognosis; nevertheless, increased intake of fruits and vegetables has rarely been studied in this context. Objective: In a prospective controlled study, the effect of increased fruit and vegetable intake on C-reactive protein (CRP) levels was assessed. Methodology: Sixty consecutive women undergoing cosmetic abdominal surgery were instructed to consume six servings each of fruits and vegetables during the first postoperative month. A detailed 24-h interviewer-administered dietary recall was conducted at baseline and at the end of the study, with weekly returns to monitor unscheduled dietary changes and compliance with the protocol. Analysis of variance (ANOVA) and analysis of covariance (ANCOVA) were performed to confirm significance and minimize confounding. Results: No differences in age (42.2 +/- 5.3 vs 41.1 +/- 6.0 years) or BMI (25.5 +/- 3.1 vs 25.0 +/- 3.0 kg/m2) occurred. Ingestion of fruits increased to approximately 5.2 vs 3.9 servings/day, and of vegetables to 5.9 vs 3.4 servings/day, respectively. CRP decreased more conspicuously in the treated group (P = 0.028), and a correlation between vitamin C intake and CRP in supplemented participants was demonstrated (P = 0.014). Conclusions: A higher intake of antioxidant foods was feasible, and an anti-inflammatory effect occurred. Further studies with longer administration and follow-up periods are recommended.
Abstract:
Eukaryotic phenotypic diversity arises from multitasking of a core proteome of limited size. Multitasking is routine in computers, as well as in other sophisticated information systems, and requires multiple inputs and outputs to control and integrate network activity. Higher eukaryotes have a mosaic gene structure with a dual output: mRNA (protein-coding) sequences and introns, which are released from the pre-mRNA by posttranscriptional processing. Introns have been enormously successful as a class of sequences and comprise up to 95% of the primary transcripts of protein-coding genes in mammals. In addition, many other transcripts (perhaps more than half) do not encode proteins at all, but appear both to be developmentally regulated and to have genetic function. We suggest that these RNAs (eRNAs) have evolved to function as endogenous network control molecules that enable direct gene-gene communication and multitasking of eukaryotic genomes. Analysis of a range of complex genetic phenomena in which RNA is involved or implicated, including co-suppression, transgene silencing, RNA interference, imprinting, methylation, and transvection, suggests that a higher-order regulatory system based on RNA signals operates in the higher eukaryotes and involves chromatin remodeling as well as other RNA-DNA, RNA-RNA, and RNA-protein interactions. The evolution of densely connected gene networks would be expected to result in a relatively stable core proteome due to the multiple reuse of components, implying that cellular differentiation and phenotypic variation in the higher eukaryotes result primarily from variation in the control architecture. Thus, network integration and multitasking using trans-acting RNA molecules produced in parallel with protein-coding sequences may underpin both the evolution of developmentally sophisticated multicellular organisms and the rapid expansion of phenotypic complexity into uncontested environments, such as those initiated in the Cambrian radiation and those seen after major extinction events.
Abstract:
It is common for a real-time system to contain a nonterminating process monitoring an input and controlling an output. Hence, a real-time program development method needs to support nonterminating repetitions. In this paper we develop a general proof rule for reasoning about possibly nonterminating repetitions. The rule makes use of a Floyd-Hoare-style loop invariant that is maintained by each iteration of the repetition, a Jones-style relation between the pre- and post-states of each iteration, and a deadline specifying an upper bound on the starting time of each iteration. The general rule is proved correct with respect to a predicative semantics. In the case of a terminating repetition the rule reduces to the standard rule extended to handle real time. Other special cases include repetitions whose bodies are guaranteed to terminate, nonterminating repetitions with the constant true as guard, and repetitions whose termination is guaranteed by the inclusion of a fixed deadline.
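For orientation, the standard while rule that the general rule specializes to can be sketched as follows; the paper's rule additionally threads a Jones-style pre/post relation through each iteration and bounds each iteration's start time by a deadline (this schematic is illustrative, not the paper's exact rule):

```latex
\frac{\{\, I \land g \,\}\ \mathit{body}\ \{\, I \,\}}
     {\{\, I \,\}\ \mathbf{while}\ g\ \mathbf{do}\ \mathit{body}\ \{\, I \land \lnot g \,\}}
```

For a possibly nonterminating repetition the postcondition may never be reached, so the real-time rule instead constrains the timed behaviour of every iteration via the invariant, the relation, and the deadline.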
Abstract:
As a component of archaeological investigations on the central Queensland coast, a series of five marine shell specimens live-collected between A.D. 1904 and A.D. 1929 and 11 shell/charcoal paired samples from archaeological contexts were radiocarbon dated to determine local ΔR values. The object of the study was to assess the potential influence of localized variation in the marine reservoir effect on accurately determining the age of marine and estuarine shell from archaeological deposits in the area. Results indicate that the routinely applied ΔR value of -5 +/- 35 for northeast Australia is erroneously calculated. The determined values suggest a minor revision of Reimer and Reimer's (2000) recommended value for northeast Australia from ΔR = +11 +/- 5 to +12 +/- 7, and specifically for central Queensland to ΔR = +10 +/- 7, for near-shore open marine environments. In contrast, data obtained from estuarine shell/charcoal pairs demonstrate a general lack of consistency, suggesting estuary-specific patterns of variation in terrestrial carbon input and exchange with the open ocean. Preliminary data indicate that in some estuaries, at some time periods, a ΔR value of more than -155 +/- 55 may be appropriate. In estuarine contexts in central Queensland, a localized estuary-specific correction factor is recommended to account for geographical and temporal variation in C-14 activity.