886 results for Optimal test set
Abstract:
In this paper, a Variable Neighborhood Search (VNS) algorithm for solving the Capacitated Single Allocation Hub Location Problem (CSAHLP) is presented. CSAHLP consists of two subproblems: the first is choosing a set of hubs from all nodes in a network, while the second is finding the optimal allocation of non-hubs to hubs once the set of hubs is known. The VNS algorithm was used for the first subproblem, while the CPLEX solver was used for the second. Computational results demonstrate that the proposed algorithm reached optimal solutions, in short computational times, on all 20 test instances for which optimal solutions are known.
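The abstract names the decomposition (VNS for hub selection, CPLEX for allocation) but not the search loop itself. As a rough illustration only, a generic VNS skeleton for the hub-selection subproblem might look like the sketch below; the shaking move, the `evaluate` callback (which would wrap the CPLEX allocation solve), and all parameters are placeholders, not the paper's implementation.

```python
import random

def vns_hub_selection(nodes, evaluate, k_max=3, max_iters=100):
    """Generic VNS skeleton. 'nodes' is a list; 'evaluate' is assumed to
    solve the allocation subproblem for a given hub set (e.g. via CPLEX)
    and return its total cost."""
    hubs = set(random.sample(nodes, max(1, len(nodes) // 4)))  # arbitrary start
    best_cost = evaluate(hubs)
    for _ in range(max_iters):
        k = 1
        while k <= k_max:
            # Shaking: swap k hubs for k non-hubs (one possible neighborhood)
            candidate = set(hubs)
            out = random.sample(sorted(candidate), min(k, len(candidate)))
            pool = [n for n in nodes if n not in candidate]
            inc = random.sample(pool, min(k, len(pool)))
            candidate.difference_update(out)
            candidate.update(inc)
            cost = evaluate(candidate)
            if cost < best_cost:       # improvement: move and restart at k = 1
                hubs, best_cost = candidate, cost
                k = 1
            else:                      # no improvement: widen the neighborhood
                k += 1
    return hubs, best_cost
```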
Abstract:
Big data comes in various types, shapes, forms and sizes. Indeed, almost all areas of science, technology, medicine, public health, economics, business, linguistics and social science are bombarded by ever-increasing flows of data begging to be analyzed efficiently and effectively. In this paper, we propose a rough idea of a possible taxonomy of big data, along with some of the most commonly used tools for handling each particular category of bigness. The dimensionality p of the input space and the sample size n are usually the main ingredients in the characterization of data bigness. The specific statistical machine learning technique used to handle a particular big data set will depend on which category it falls in within the bigness taxonomy. Large p, small n data sets, for instance, require a different set of tools from the large n, small p variety. Among other tools, we discuss Preprocessing, Standardization, Imputation, Projection, Regularization, Penalization, Compression, Reduction, Selection, Kernelization, Hybridization, Parallelization, Aggregation, Randomization, Replication, and Sequentialization. It is important to emphasize right away that the so-called no free lunch theorem applies here, in the sense that there is no universally superior method that outperforms all other methods on all categories of bigness. It is also important to stress that simplicity, in the sense of Ockham's razor and its non-plurality principle of parsimony, tends to reign supreme when it comes to massive data. We conclude with a comparison of the predictive performance of some of the most commonly used methods on a few data sets.
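As a toy illustration of the point that the tool should follow the bigness category, the fragment below routes a data set to a different scikit-learn estimator family depending on its (n, p) shape; the thresholds and the particular estimators are arbitrary placeholders, not recommendations from the paper.

```python
from sklearn.linear_model import Lasso, SGDRegressor
from sklearn.kernel_ridge import KernelRidge

def pick_estimator(n, p):
    """Toy dispatcher over the (n, p) 'bigness' taxonomy."""
    if p >= n:                # large p, small n: selection / regularization
        return Lasso(alpha=0.1)
    if n > 100_000:           # large n: sequentialization via online learning
        return SGDRegressor()
    return KernelRidge(kernel="rbf")  # moderate n and p: kernelization
```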
Abstract:
Differential evolution is an optimisation technique that has been successfully employed in various applications. In this paper, we apply differential evolution to the problem of extracting the optimal colours of a colour map for quantised images. The choice of entries in the colour map is crucial for the resulting image quality, as it forms a look-up table that is used for all pixels in the image. We show that differential evolution can be effectively employed as a method for deriving the entries in the map. In order to optimise the image quality, our differential evolution approach is combined with a local search method that is guaranteed to find a locally optimal colour map. This hybrid approach is shown to outperform various commonly used colour quantisation algorithms on a set of standard images.
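A minimal sketch of how DE might evolve a k-entry colour map is given below, assuming the image pixels are supplied as an (N, 3) array; the DE/rand/1 scheme and all parameters are generic defaults, and the paper's guaranteed local-search refinement step is omitted.

```python
import numpy as np

def quantisation_error(palette, pixels):
    """MSE after mapping each pixel to its nearest palette colour."""
    d = np.linalg.norm(pixels[:, None, :] - palette[None, :, :], axis=2)
    return np.mean(d.min(axis=1) ** 2)

def de_colour_map(pixels, k=16, pop=20, gens=100, F=0.5, CR=0.9):
    rng = np.random.default_rng()
    P = rng.uniform(0, 255, size=(pop, k, 3))           # candidate palettes
    fit = np.array([quantisation_error(p, pixels) for p in P])
    for _ in range(gens):
        for i in range(pop):
            idx = rng.choice([j for j in range(pop) if j != i], 3,
                             replace=False)
            a, b, c = P[idx]
            mutant = np.clip(a + F * (b - c), 0, 255)   # DE/rand/1 mutation
            mask = rng.random((k, 3)) < CR              # binomial crossover
            trial = np.where(mask, mutant, P[i])
            f = quantisation_error(trial, pixels)
            if f < fit[i]:                              # greedy selection
                P[i], fit[i] = trial, f
    return P[fit.argmin()]
```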
Abstract:
The study presents a new method for price determination, drawing on the toolkit of microeconomics and 2013 data from the Hungarian car market. The central question of the research is where to find the point at which the consumer is satisfied with the quality and price offered (preferably delivered at the right time) and the company is satisfied with the profit achieved. Quality and time, as value-creating functions, therefore play a central role in the price determination. One of the main conclusions of the analysis is that the optimal price, derived from the profit maximum, can be determined for various parameters of quality and time. With this method, companies gain, through the tools of economics, a new perspective for setting their operating parameters and, with them, their competitive priorities (price, cost, quality level, time).
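The abstract does not spell out the underlying model, but the general shape of deriving an optimal price from a profit maximum can be illustrated with a generic profit function in which demand D depends on price p, quality q, and delivery time t (an illustrative assumption, not the paper's specification):

\[
\pi(p; q, t) = (p - c)\,D(p, q, t), \qquad
\frac{\partial \pi}{\partial p} = D + (p - c)\,\frac{\partial D}{\partial p} = 0
\;\Rightarrow\;
p^{*}(q, t) = c - \frac{D(p^{*}, q, t)}{\partial D/\partial p}.
\]

For a constant price elasticity of demand \(\varepsilon > 1\), this reduces to the familiar markup rule \(p^{*} = c\,\varepsilon/(\varepsilon - 1)\), with quality and time entering through their effect on demand.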
Abstract:
The purpose of this research was to explore the effects of a reform that took place in an elementary school during 2000/2001, as a result of a failure rating on the Florida Comprehensive Assessment Test, on the structure and the personnel of the organization. The exploration took place over a period of 10 months, from August 2000 until June 2001. It focused on the effect of the failure rating on: (a) the structure and operation of the school; (b) the morale, beliefs, behaviors, and daily lives of teachers and the principal; and (c) the leadership style of the principal, and whether she became a transactional or a transformative leader. The researcher assumed the role of a participant observer. Data sources were her personal recollections of major events that took place during the year of the reform, interviews, observations, and school documents. The sample included 15 teachers present during the time of the reform. Ten taught second through fifth grade. The remaining five participants were the music teacher, the counselor, and the writing, reading, and technology specialists. Together they represented the instructional team or special education areas. The findings indicated that the reform effort had an effect on the structure and the operation of the school. The changes included reorganization of the physical setup, changes in curriculum and instruction, changes in the means of communication among the staff, and the addition of new staff members, including an official agent of change. The reform had a greater effect on the daily lives of teachers and their morale than on their beliefs and behaviors. Teachers reported that during the effort their daily lives were stressful and their morale very low due to the enormous expectations they had to meet. On the other hand, the reform effort had a positive effect on the daily life, morale, beliefs, and behaviors of the principal. It energized her. She spoke positively about the change. She functioned as an effective, positive, resilient transactional leader who did what was necessary to enable the teachers to cope with the complex situation.
Abstract:
The span of control is the most discussed single concept in classical and modern management theory. In specifying conditions for organizational effectiveness, the span of control has generally been regarded as a critical factor. Existing research has focused mainly on qualitative methods to analyze this concept, for example heuristic rules based on experience and/or intuition. This research takes a quantitative approach to the problem and formulates it as a binary integer model, which is used as a tool to study the organizational design issue. The model considers a range of requirements affecting management and supervision of a given set of jobs in a company. Its decision variables include the allocation of jobs to workers, considering the complexity and compatibility of each job with respect to workers, and the management requirements for planning, execution, training, and control activities in a hierarchical organization. The objective of the model is to minimize operations cost, which is the sum of supervision costs at each level of the hierarchy and the costs of workers assigned to jobs. The model is intended for application in make-to-order industries as a design tool. It could also be applied to make-to-stock companies as an evaluation tool, to assess the optimality of their current organizational structure. Extensive experiments were conducted to validate the model, to study its behavior, and to evaluate the impact of changing parameters on practical problems. This research proposes a meta-heuristic approach to solving large-size problems, based on the concept of greedy algorithms and the Meta-RaPS algorithm. The proposed heuristic was evaluated with two measures of performance: solution quality and computational speed. Quality is assessed by comparing the obtained objective function value to the one achieved by the optimal solution. Computational efficiency is assessed by comparing the computer time used by the proposed heuristic to the time taken by a commercial software system. Test results show that the proposed heuristic procedure generates good solutions in a time-efficient manner.
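The abstract describes the binary integer model only in outline. A stripped-down sketch of its assignment core (jobs to workers with compatibility and workload limits, omitting the hierarchical supervision layers) could be written with a generic ILP library such as PuLP; all names and data structures here are illustrative assumptions, not the dissertation's formulation.

```python
import pulp

def assign_jobs(jobs, workers, cost, compatible, capacity):
    """Minimal assignment core: cost[w][j] = cost of worker w doing job j,
    compatible[w][j] = 1 if the pairing is allowed, capacity[w] = workload
    limit. The full model adds hierarchical supervision costs on top."""
    m = pulp.LpProblem("span_of_control", pulp.LpMinimize)
    x = {(w, j): pulp.LpVariable(f"x_{w}_{j}", cat="Binary")
         for w in workers for j in jobs}
    m += pulp.lpSum(cost[w][j] * x[w, j] for w in workers for j in jobs)
    for j in jobs:                                   # every job assigned once
        m += pulp.lpSum(x[w, j] for w in workers) == 1
    for w in workers:                                # workload limits
        m += pulp.lpSum(x[w, j] for j in jobs) <= capacity[w]
    for w in workers:                                # compatibility
        for j in jobs:
            m += x[w, j] <= compatible[w][j]
    m.solve()
    return {j: w for (w, j) in x if x[w, j].value() == 1}
```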
Design optimization of modern machine drive systems for maximally fault-tolerant and optimal operation
Abstract:
Modern electric machine drives, particularly three-phase permanent magnet machine drive systems, represent an indispensable part of high-power-density products. Such products include hybrid electric vehicles, large propulsion systems, and automation products. The reliability and cost of these products are directly related to the reliability and cost of these systems. The compatibility of the electric machine and its drive system for optimal cost and operation has been a large challenge in industrial applications. The main objective of this dissertation is to find a design and control scheme for the best compromise between the reliability and optimality of the electric machine-drive system. The effort presented here is motivated by the need to find new techniques to connect the design and control of electric machines and drive systems. A highly accurate and computationally efficient modeling process was developed to monitor the magnetic, thermal, and electrical aspects of the electric machine in its operational environments. The modeling process was also utilized in the design process, in the form of a finite-element-based optimization process, as well as in a hardware-in-the-loop finite-element-based optimization process. The modeling process was later employed in the design of highly accurate and efficient physics-based customized observers, which are required for fault diagnosis as well as for sensorless rotor position estimation. Two test setups with different ratings and topologies were numerically and experimentally tested to verify the effectiveness of the proposed techniques. The modeling process was also employed in the real-time demagnetization control of the machine. Various real-time scenarios were successfully verified. It was shown that this process offers the potential to optimally redefine the assumptions made in sizing the permanent magnets of the machine and the DC bus voltage of the drive for the worst operating conditions. The mathematical development and stability criteria of the physics-based modeling of the machine, the design optimization, and the physics-based fault diagnosis and sensorless techniques are described in detail. To investigate the performance of the developed design test-bed, software and hardware setups were constructed first. Several topologies of the permanent magnet machine were optimized inside the optimization test-bed. To investigate the performance of the developed sensorless control, a test-bed including a 0.25 kW surface-mounted permanent magnet synchronous machine was created. The verification of the proposed technique in a range from medium to very low speed effectively shows the intelligent design capability of the proposed system. Additionally, to investigate the performance of the developed fault diagnosis system, a test-bed including a 0.8 kW surface-mounted permanent magnet synchronous machine with trapezoidal back electromotive force was created. The results verify the proposed technique under dynamic eccentricity, DC bus voltage variations, and harmonic loading conditions, making the system an ideal candidate for propulsion systems.
Abstract:
The unprecedented and relentless growth in the electronics industry is feeding the demand for integrated circuits (ICs) with increasing functionality and performance at minimum cost and power consumption. As predicted by Moore's law, ICs are being aggressively scaled to meet this demand. While the continuous scaling of process technology is reducing gate delays, the performance of ICs is being increasingly dominated by interconnect delays. In an effort to improve submicrometer interconnect performance, to increase packing density, and to reduce chip area and power consumption, the semiconductor industry is focusing on three-dimensional (3D) integration. However, volume production and commercial exploitation of 3D integration are not feasible yet due to significant technical hurdles.
At the present time, interposer-based 2.5D integration is emerging as a precursor to stacked 3D integration. All the dies and the interposer in a 2.5D IC must be adequately tested for product qualification. However, since the structure of 2.5D ICs differs from that of traditional 2D ICs, new challenges have emerged: (1) pre-bond interposer testing, (2) lack of test access, (3) limited ability for at-speed testing, (4) high-density I/O ports and interconnects, (5) a reduced number of test pins, and (6) high power consumption. This research targets the above challenges, and effective solutions have been developed to test both the dies and the interposer.
The dissertation first introduces the basic concepts of 3D ICs and 2.5D ICs. Prior work on testing of 2.5D ICs is studied. An efficient method is presented to locate defects in a passive interposer before stacking. The proposed test architecture uses e-fuses that can be programmed to connect or disconnect functional paths inside the interposer. The concept of a die footprint is utilized for interconnect testing, and the overall assembly and test flow is described. Moreover, the concept of weighted critical area is defined and utilized to reduce test time. In order to fully determine the location of each e-fuse and the order of functional interconnects in a test path, we also present a test-path design algorithm. The proposed algorithm can generate all test paths for interconnect testing.
In order to test for opens, shorts, and interconnect delay defects in the interposer, a test architecture is proposed that is fully compatible with the IEEE 1149.1 standard and relies on an enhancement of the standard test access port (TAP) controller. To reduce test cost, a test-path design and scheduling technique is also presented that minimizes a composite cost function based on test time and the design-for-test (DfT) overhead in terms of additional through silicon vias (TSVs) and micro-bumps needed for test access. The locations of the dies on the interposer are taken into consideration in order to determine the order of dies in a test path.
To address the scenario of high density of I/O ports and interconnects, an efficient built-in self-test (BIST) technique is presented that targets the dies and the interposer interconnects. The proposed BIST architecture can be enabled by the standard TAP controller in the IEEE 1149.1 standard. The area overhead introduced by this BIST architecture is negligible; it includes two simple BIST controllers, a linear-feedback-shift-register (LFSR), a multiple-input-signature-register (MISR), and some extensions to the boundary-scan cells in the dies on the interposer. With these extensions, all boundary-scan cells can be used for self-configuration and self-diagnosis during interconnect testing. To reduce the overall test cost, a test scheduling and optimization technique under power constraints is described.
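As background on the BIST building blocks mentioned above, the sketch below gives a software model of an LFSR pattern generator and a MISR signature compactor; the width, tap positions, and looped-back response are toy placeholders, not the dissertation's design.

```python
def lfsr_step(state, taps, width):
    """Fibonacci LFSR: shift left, feeding the XOR of tapped bits into bit 0."""
    fb = 0
    for t in taps:
        fb ^= (state >> t) & 1
    return ((state << 1) | fb) & ((1 << width) - 1)

def misr_step(state, data_in, taps, width):
    """MISR: an LFSR that also XORs parallel response bits in each cycle."""
    return lfsr_step(state, taps, width) ^ data_in

# Toy run: 8-bit LFSR feeding interconnect test patterns, 8-bit MISR
# compacting the (here: simply looped-back) responses into a signature.
width, taps = 8, (7, 5, 4, 3)          # placeholder tap positions
pattern, signature = 0x01, 0x00
for _ in range(16):
    pattern = lfsr_step(pattern, taps, width)
    signature = misr_step(signature, pattern, taps, width)
print(hex(signature))                   # golden signature for comparison
```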
In order to accomplish testing with a small number of test pins, the dissertation presents two efficient ExTest scheduling strategies that implement interconnect testing between tiles inside a system-on-chip (SoC) die on the interposer, while satisfying the practical constraint that the number of required test pins cannot exceed the number of available pins at the chip level. The tiles in the SoC are divided into groups based on the manner in which they are interconnected. In order to minimize the test time, two optimization solutions are introduced. The first solution minimizes the number of input test pins, and the second minimizes the number of output test pins. In addition, two subgroup configuration methods are proposed to generate subgroups inside each test group.
Finally, the dissertation presents a programmable method for shift-clock stagger assignment to reduce power supply noise during SoC die testing in 2.5D ICs. An SoC die in a 2.5D IC is typically composed of several blocks, and two neighboring blocks that share the same power rails should not be toggled at the same time during shift. Therefore, the proposed programmable method does not assign the same stagger value to neighboring blocks. The positions of all blocks are first analyzed, and the shared boundary length between blocks is then calculated. Based on the positional relationships between the blocks, a mathematical model is presented to derive optimal results for small-to-medium-sized problems. For larger designs, a heuristic algorithm is proposed and evaluated.
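The heuristic itself is not given in the abstract; one plausible greedy sketch consistent with the stated rule (neighbouring blocks never share a stagger value, with the most boundary-constrained blocks handled first) is shown below, with all data structures assumed for illustration.

```python
def assign_staggers(blocks, shared_boundary, n_staggers):
    """blocks: iterable of block ids. shared_boundary: dict mapping a sorted
    pair (a, b) to the length of the power-rail boundary the blocks share."""
    neighbours = {b: set() for b in blocks}
    for (a, b), length in shared_boundary.items():
        if length > 0:
            neighbours[a].add(b)
            neighbours[b].add(a)
    # Handle the most constrained blocks (longest total shared boundary) first.
    def total_boundary(b):
        return sum(shared_boundary.get(tuple(sorted((b, n))), 0)
                   for n in neighbours[b])
    stagger = {}
    for b in sorted(blocks, key=total_boundary, reverse=True):
        used = {stagger[n] for n in neighbours[b] if n in stagger}
        free = [s for s in range(n_staggers) if s not in used]
        # Fall back to 0 if every stagger value is already taken by a neighbour.
        stagger[b] = free[0] if free else 0
    return stagger
```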
In summary, the dissertation targets important design and optimization problems related to testing of interposer-based 2.5D ICs. The proposed research has led to theoretical insights, experiment results, and a set of test and design-for-test methods to make testing effective and feasible from a cost perspective.
Abstract:
With the increasing prevalence and capabilities of autonomous systems as part of complex heterogeneous manned-unmanned environments (HMUEs), an important consideration is the impact of the introduction of automation on the optimal assignment of human personnel. The US Navy implemented optimal staffing techniques in the 1990s and 2000s with a "minimal staffing" approach. The results were poor, leading to the degradation of Naval preparedness. Clearly, another approach to determining optimal staffing is necessary. To this end, the goal of this research is to develop human performance models for use in determining optimal manning of HMUEs. The human performance models are developed using an agent-based simulation of the aircraft carrier flight deck, a representative safety-critical HMUE. The Personnel Multi-Agent Safety and Control Simulation (PMASCS) simulates and analyzes the effects of introducing generalized maintenance crew skill sets and accelerated failure repair times on the overall performance and safety of the carrier flight deck. A behavioral model of five operator types (ordnance officers, chocks and chains, fueling officers, plane captains, and maintenance operators) is presented here, along with an aircraft failure model. The main focus of this work is on the maintenance operators and aircraft failure modeling, since they have a direct impact on total launch time, a primary metric for carrier deck performance. With PMASCS I explore the effects of two variables on the total launch time of 22 aircraft: (1) the skill level of maintenance operators and (2) aircraft failure repair times while on the catapult (referred to as Phase 4 repair times). It is found that neither introducing a generic skill set to maintenance crews nor introducing a technology to accelerate Phase 4 aircraft repair times improves the average total launch time of 22 aircraft. An optimal manning level of 3 maintenance crews is found under all conditions, the point at which additional maintenance crews do not reduce the total launch time. An additional discussion is included of how these results change if the operations are relieved of the bottleneck of installing the holdback bar at launch time.
Abstract:
An abundance of research in the social sciences has demonstrated a persistent bias against nonnative English speakers (Giles & Billings, 2004; Gluszek & Dovidio, 2010). Yet, organizational scholars have only begun to investigate the underlying mechanisms that drive the bias against nonnative speakers and subsequently design interventions to mitigate these biases. In this dissertation, I offer an integrative model to organize past explanations for accent-based bias into a coherent framework, and posit that nonnative accents elicit social perceptions that have implications at the personal, relational, and group level. I also seek to complement the existing emphasis on main effects of accents, which focuses on the general tendency to discriminate against those with accents, by examining moderators that shed light on the conditions under which accent-based bias is most likely to occur. Specifically, I explore the idea that people’s beliefs about the controllability of accents can moderate their evaluations toward nonnative speakers, such that those who believe that accents can be controlled are more likely to demonstrate a bias against nonnative speakers. I empirically test my theoretical model in three studies in the context of entrepreneurial funding decisions. Results generally supported the proposed model. By examining the micro foundations of accent-based bias, the ideas explored in this dissertation set the stage for future research in an increasingly multilingual world.
Abstract:
Professional language assessment is a new concept that has great potential to benefit Internationally Educated Professionals and the communities they serve. This thesis reports on a qualitative study that examined the responses of 16 Canadian English Language Benchmark Assessment for Nurses (CELBAN) test-takers on the topic of their perceptions of the CELBAN test-taking experience in Ontario in the winter of 2015. An Ontario organization involved in registering participants distributed an e-mail through their listserv. Thematic analyses of focus group and interview transcripts identified 7 themes in the data. These themes were used to inform conclusions to the following questions: (1) How do internationally educated nurses (IENs) characterize their assessment experience? (2) How do IENs describe the testing constructs measured by the CELBAN? (3) What, if any, potential sources of construct-irrelevant variance (CIV) do the test-takers describe based on their assessment experience? (4) Do IENs feel that the CELBAN tasks provide a good reflection of the types of communicative tasks required of a nurse? Overall, participants reported positive experiences with the CELBAN as an assessment of their language skills, and noted some instances in which they felt factors external to the assessment impacted the demonstration of their knowledge and skill. Lastly, some test-takers noted the challenge of completing the CELBAN where the types of communicative nursing tasks included in the assessment differed from nursing tasks typical of an IEN's country of origin. The findings are discussed in relation to the literature on high-stakes large-scale assessment and IEPs, and a set of recommendations is offered for future CELBAN administration. These recommendations include: (1) the provision of a webpage listing all licensure requirements; (2) monitoring of CELBAN locations and dates in relation to the wider certification timeline for applicants; (3) the provision of additional CELBAN preparatory materials; and (4) minor changes to the CELBAN administrative protocols. Given that the CELBAN is a relatively new assessment format in widespread use for high-stakes decisions (as a component of nursing certification and licensure), research validating IEN test-taker responses regarding construct representation and construct-irrelevant variance is critical to our understanding of the role of competency testing for IENs.
Abstract:
A sufficiently complex set of molecules, if subject to perturbation, will self-organise and show emergent behaviour. If such a system can take on information, it will become subject to natural selection. This could explain how self-replicating molecules evolved into life and how intelligence arose. A pivotal step in this evolutionary process was of course the emergence of the eukaryote and the advent of the mitochondrion, which both enhanced energy production per cell and increased the ability to process, store and utilise information. Recent research suggests that from its inception life embraced quantum effects such as "tunnelling" and "coherence", while competition and stressful conditions provided a constant driver for natural selection. We believe that the biphasic adaptive response to stress described by hormesis, a process that captures information to enable adaptability, is central to this whole process. Critically, hormesis could improve mitochondrial quantum efficiency, improving the ATP/ROS ratio, while inflammation, which is tightly associated with the ageing process, might do the opposite. This all suggests that to achieve optimal health and healthy ageing, one has to sufficiently stress the system to ensure peak mitochondrial function, which itself could reflect selection of optimum efficiency at the quantum level.
Abstract:
Introduction
Evaluating quality of palliative day services is essential for assessing care across diverse settings, and for monitoring quality improvement approaches.
Aim
To develop a set of quality indicators for assessment of all aspects (structure, process and outcome) of care in palliative day services.
Methods
Using a modified version of the RAND/UCLA appropriateness method (Fitch et al., 2001), a multidisciplinary panel of 16 experts independently completed a survey rating the appropriateness of 182 potential quality indicators previously identified during a systematic evidence review. Panel members then attended a one day, face-to-face meeting where indicators were discussed and subsequently re-rated. Panel members were also asked to rate the feasibility and necessity of measuring each indicator.
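For readers unfamiliar with the rating arithmetic behind the RAND/UCLA method, the fragment below sketches the usual classification step on a 1 to 9 scale; the thresholds are the textbook defaults described by Fitch et al. (2001), not necessarily the exact criteria applied by this panel.

```python
from statistics import median

def classify_indicator(ratings):
    """ratings: one 1-9 appropriateness score per panel member."""
    med = median(ratings)
    third = len(ratings) / 3
    # Disagreement: a third or more of panellists in each extreme tertile.
    if (sum(r <= 3 for r in ratings) >= third
            and sum(r >= 7 for r in ratings) >= third):
        return "uncertain (disagreement)"
    if med >= 7:
        return "appropriate"
    if med <= 3:
        return "inappropriate"   # candidates for removal after the survey round
    return "uncertain"
```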
Results
71 indicators classified as inappropriate during the survey were removed based on median appropriateness ratings and level of agreement. Following the panel discussions, a further 60 were removed based on appropriateness and feasibility ratings, level of agreement and assessment of necessity. Themes identified during the panel discussion and findings of the evidence review were used to translate the remaining 51 indicators into a final set of 27.
Conclusion
The final indicator set included information on rationale and supporting evidence, methods of assessment, risk adjustment, and recommended performance levels. Further implementation work will test the suitability of this ‘toolkit’ for measurement and benchmarking. The final indicator set provides the basis for standardised assessment of quality across services, including care delivered in community and primary care settings.
Reference
• Fitch K, Bernstein SJ, Aguilar MD, et al. The RAND/UCLA Appropriateness Method User’s Manual. Santa Monica, CA: RAND Corporation; 2001. http://www.rand.org/pubs/monograph_reports/MR1269
Abstract:
This paper develops an integrated optimal power flow (OPF) tool for distribution networks at two spatial scales. At the local scale, the distribution network, the natural gas network, and the heat system are coordinated as a microgrid. At the urban scale, the impact of the natural gas network is considered as a set of constraints on distribution network operation. The proposed approach incorporates unbalanced three-phase electrical systems, natural gas systems, and combined cooling, heating, and power systems. The interactions among these three energy systems are described by an energy hub model combined with component capacity constraints. In order to efficiently handle the nonlinear constrained optimization problem, a particle swarm optimization algorithm is employed to set the control variables in the OPF problem. Numerical studies indicate that, by using the OPF method, the distribution network can be operated economically. Also, the tie-line power can be effectively managed.
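The abstract does not detail the PSO variant used; a minimal, generic PSO loop of the kind that could set the OPF control variables is sketched below, where the objective `f` is assumed to wrap the power-flow evaluation and fold constraint violations in as penalty terms.

```python
import numpy as np

def pso_minimise(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal PSO: f maps a control-variable vector to total cost;
    bounds is a (d, 2) array of [lower, upper] limits per variable."""
    rng = np.random.default_rng()
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)                  # keep within bounds
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f                        # update personal bests
        pbest[better], pbest_f[better] = x[better], fx[better]
        gbest = pbest[pbest_f.argmin()].copy()       # update global best
    return gbest, pbest_f.min()
```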