985 results for Computers


Relevance:

10.00%

Publisher:

Abstract:

The ability to manipulate small fluid droplets, colloidal particles and single cells with the precision and parallelization of modern-day computer hardware has profound applications for biochemical detection, gene sequencing, chemical synthesis and highly parallel analysis of single cells. Drawing inspiration from general circuit theory and magnetic bubble technology, here we demonstrate a class of integrated circuits for executing sequential and parallel, timed operations on an ensemble of single particles and cells. The integrated circuits are constructed from lithographically defined, overlaid patterns of magnetic film and current lines. The magnetic patterns passively control particles similar to electrical conductors, diodes and capacitors. The current lines actively switch particles between different tracks similar to gated electrical transistors. When combined into arrays and driven by a rotating magnetic field clock, these integrated circuits have general multiplexing properties and enable the precise control of magnetizable objects.
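As a rough illustration of the clocked, transistor-like switching described above, the following toy sketch advances particles one trap site per field-rotation cycle and diverts a particle at a branch point when a "current line" gate is energized. The track layout, branch site, gate timing, and function names are illustrative assumptions, not the authors' device model.

```python
# Toy sketch (not the authors' device physics): particles advance one trap site per
# clock cycle along passive "conductor" tracks; an actively driven current line acts
# like a gated transistor, routing a particle onto a second track at a branch point.

def step(positions, gate_open, branch_at=5):
    """Advance every particle by one site per field rotation (one clock cycle).

    positions: dict particle_id -> (track, site)
    gate_open: if True, particles reaching the branch site switch to track 'B'.
    """
    new_positions = {}
    for pid, (track, site) in positions.items():
        site += 1                      # one trap per clock cycle, like a shift register
        if track == "A" and site == branch_at and gate_open:
            track = "B"                # current line energized: divert the particle
        new_positions[pid] = (track, site)
    return new_positions

particles = {"cell_1": ("A", 0), "cell_2": ("A", 2)}
for cycle in range(6):
    particles = step(particles, gate_open=(cycle == 2))
print(particles)   # cell_2 reaches the branch while the gate is open and is diverted
```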

Relevance:

10.00%

Publisher:

Abstract:

BACKGROUND: Risk assessment with a thorough family health history is recommended by numerous organizations and is now a required component of the annual physical for Medicare beneficiaries under the Affordable Care Act. However, there are several barriers to incorporating robust risk assessments into routine care. MeTree, a web-based patient-facing health risk assessment tool, was developed with the aim of overcoming these barriers. In order to better understand what factors will be instrumental for broader adoption of risk assessment programs like MeTree in clinical settings, we obtained funding to perform a type III hybrid implementation-effectiveness study in primary care clinics at five diverse healthcare systems. Here, we describe the study's protocol. METHODS/DESIGN: MeTree collects personal medical information and a three-generation family health history from patients on 98 conditions. Using algorithms built entirely from current clinical guidelines, it provides clinical decision support to providers and patients on 30 conditions. All adult patients with an upcoming well-visit appointment at one of the 20 intervention clinics are eligible to participate. Patient-oriented risk reports are provided in real time. Provider-oriented risk reports are uploaded to the electronic medical record for review at the time of the appointment. Implementation outcomes are enrollment rate of clinics, providers, and patients (enrolled vs approached) and their representativeness compared to the underlying population. Primary effectiveness outcomes are the percent of participants newly identified as being at increased risk for one of the clinical decision support conditions and the percent with appropriate risk-based screening. Secondary outcomes include percent change in those meeting goals for a healthy lifestyle (diet, exercise, and smoking). Outcomes are measured through electronic medical record data abstraction, patient surveys, and surveys/qualitative interviews of clinical staff. DISCUSSION: This study evaluates factors that are critical to successful implementation of a web-based risk assessment tool into routine clinical care in a variety of healthcare settings. The results will identify resource needs, potential barriers, and solutions to implementation in each setting, as well as provide an understanding of potential effectiveness. TRIAL REGISTRATION: NCT01956773.

Relevance:

10.00%

Publisher:

Abstract:

While the number of traditional laptops and computers sold has dipped slightly year over year, manufacturers have developed new hybrid laptops with touch screens to build on the tactile trend. This market is moving quickly to make touch the rule rather than the exception, and sales of these devices have tripled since the launch of Windows 8 in 2012, reaching more than sixty million units sold in 2015. Unlike tablets, which benefit from easy-to-use applications specially designed for tactile interaction, hybrid laptops are intended to be used with regular user interfaces. Hence, one could ask whether tactile interactions are suited to every task and activity performed with such interfaces. Since hybrid laptops are increasingly used in educational settings, this study focuses on information search tasks, which are commonly performed for learning purposes. It is hypothesized that tasks requiring complex and/or less common gestures will increase users' cognitive load and impair task performance in terms of efficacy and efficiency. A study was carried out in a usability laboratory with 30 participants whose prior experience with tactile devices was controlled. They were asked to perform information search tasks on an online encyclopaedia using only the touch screen of a hybrid laptop. Tasks were selected with respect to their level of cognitive demand (the amount of information that had to be maintained in working memory) and the complexity of the gestures needed (left and/or right clicks, zoom, text selection and/or input), and were grouped into four sets accordingly. Task performance was measured by the number of tasks completed successfully (efficacy) and the time spent on each task (efficiency). Perceived cognitive load was assessed with a questionnaire given after each set of tasks. An eye-tracking device was used to monitor users' attention allocation and to provide objective cognitive load measures based on pupil dilation and the Index of Cognitive Activity. Each experimental run took approximately one hour. The results of this within-subjects design indicate that tasks involving complex gestures led to lower efficacy, especially when the tasks were cognitively demanding. Regarding efficiency, there were no significant differences between sets of tasks except for tasks with low cognitive demand and complex gestures, which required more time to complete. Surprisingly, users who reported the most experience with tactile devices spent more time than less frequent users. Cognitive load measures indicate that participants reported devoting more mental effort to the interaction when they had to use complex gestures.

Relevance:

10.00%

Publisher:

Abstract:

In the analysis of industrial processes, there is an increasing emphasis on systems governed by interacting continuum phenomena. Mathematical models of such multi-physics processes can only be achieved for practical simulations through computational solution procedures—computational mechanics. Examples of such multi-physics systems in the context of metals processing are used to explore some of the key issues. Finite-volume methods on unstructured meshes are proposed as a means to achieve efficient rapid solutions to such systems. Issues associated with the software design, the exploitation of high performance computers, and the concept of the virtual computational-mechanics modelling laboratory are also addressed in this context.
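As a much simplified illustration of the finite-volume idea invoked above (the paper itself concerns unstructured meshes and coupled multi-physics), here is a minimal sketch of an explicit finite-volume update for one-dimensional diffusion; the mesh, boundary conditions, and parameter values are assumptions for illustration only.

```python
# Minimal finite-volume sketch on a 1D mesh: each control volume's value changes by the
# net flux through its two faces. A deliberate toy, far simpler than the paper's setting.
import numpy as np

def diffuse_fv(u, k, dx, dt, steps):
    """Explicit finite-volume update for du/dt = d/dx(k du/dx) with zero-flux ends."""
    u = u.copy()
    for _ in range(steps):
        # diffusive fluxes across the internal faces between adjacent control volumes
        flux = -k * (u[1:] - u[:-1]) / dx
        # interior cells: net flux = right-face flux minus left-face flux
        u[1:-1] -= dt / dx * (flux[1:] - flux[:-1])
        u[0]    -= dt / dx * (flux[0] - 0.0)     # zero flux through the left boundary
        u[-1]   -= dt / dx * (0.0 - flux[-1])    # zero flux through the right boundary
    return u

u0 = np.zeros(50)
u0[20:30] = 1.0                                  # initial hot patch
print(diffuse_fv(u0, k=1.0, dx=1.0, dt=0.4, steps=200).round(3))
```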

Relevance:

10.00%

Publisher:

Abstract:

The pragmatics of 'vegetarian' and 'carnivorous' exhibits an asymmetry that we bring out by analyzing a newspaper report about vegetarian dog-owners imposing a vegetarian diet on their pets. More fundamental is the problem of partonomy versus containment, for which we attempt a naive but formal analysis applied to ingestion and the food chain, an issue derived from the same text. Our formal tools belong in commonsense modelling, a domain of artificial intelligence related to extra-linguistic knowledge and pragmatics. We first provide an interpretation of the events analyzed and express it graphically in a semantic-network-related representation, and then propose an alternative formulated in terms of a modal logic, avoiding the full representational power of Hayes's "ontology for liquids".
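The partonomy-versus-containment distinction can be sketched naively in code. The fragment below is an assumption-laden toy (the facts and the single propagation rule are invented for illustration and are not the paper's formalism): ingested food ends up inside the animal without ever becoming a part of it.

```python
# Naive sketch of partonomy (part-of) versus containment (contained-in).
# The facts and the propagation rule are illustrative assumptions only.

part_of   = {("stomach", "dog")}                 # the stomach is a part of the dog
contained = {("kibble", "stomach")}              # ingested food is contained, not a part

def contained_in(x, y, part_of, contained):
    """x is (transitively) inside y if a chain of part-of/contained-in edges leads to y."""
    todo, seen = {x}, set()
    while todo:
        cur = todo.pop()
        if cur == y:
            return True
        seen.add(cur)
        nxt = {b for (a, b) in part_of | contained if a == cur}
        todo |= nxt - seen
    return False

print(contained_in("kibble", "dog", part_of, contained))   # True: inside the dog
print(("kibble", "dog") in part_of)                        # False: not a part of it
```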

Relevance:

10.00%

Publisher:

Abstract:

FUELCON is an expert system in nuclear engineering. Its task is optimized refueling design, which is crucial for keeping down operating costs at a plant. FUELCON proposes sets of alternative fuel-allocation configurations; the fuel is positioned in a grid representing the core of a reactor. The practitioner of in-core fuel management uses FUELCON to generate a reasonably good configuration for the situation at hand. The domain expert, on the other hand, resorts to the system to test heuristics and discover new ones for the task described above. Expert use involves a manual phase of revising the ruleset, based on performance during previous iterations in the same session. This paper is concerned with a new phase: the design of a neural component to carry out the revision automatically. Such automated revision considers the previous performance of the system and uses it to adapt and learn better rules. The neural component is based on a particular schema for a symbolic-to-recurrent-analogue bridge, called NIPPL, and on reinforcement learning of neural networks for the adaptation.
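A heavily simplified sketch of reinforcement-driven rule revision follows. It is a generic bandit-style weight update, not the NIPPL schema or FUELCON's actual revision mechanism; the rule names and reward model are hypothetical.

```python
# Toy sketch: each heuristic rule carries a weight that is nudged toward the reward
# obtained by the configurations it produced. Generic bandit-style update only.
import random

def revise(weights, episodes, evaluate, lr=0.1):
    """weights: dict rule_name -> positive float; evaluate(rule) -> reward in [0, 1]."""
    rules = list(weights)
    for _ in range(episodes):
        # sample a rule with probability proportional to its current weight
        chosen = random.choices(rules, weights=[weights[r] for r in rules])[0]
        reward = evaluate(chosen)
        # move the chosen rule's weight toward the observed reward
        weights[chosen] += lr * (reward - weights[chosen])
    return weights

# Hypothetical rules with a hidden "true" quality; the better rule ends up weighted higher.
quality = {"rule_flat_power": 0.8, "rule_edge_fresh": 0.3}
w = revise({rule: 0.5 for rule in quality}, 500,
            lambda rule: quality[rule] + random.uniform(-0.1, 0.1))
print(w)
```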

Relevance:

10.00%

Publisher:

Abstract:

This paper studies two models of two-stage processing with no-wait in process. The first model is the two-machine flow shop, and the other is the assembly model. For both models we consider the problem of minimizing the makespan, provided that the setup and removal times are separated from the processing times. Each of these scheduling problems is reduced to the Traveling Salesman Problem (TSP). We show that, in general, the assembly problem is NP-hard in the strong sense. On the other hand, the two-machine flow shop problem reduces to the Gilmore-Gomory TSP, and is solvable in polynomial time. The same holds for the assembly problem under some reasonable assumptions. Using these and existing results, we provide a complete complexity classification of the relevant two-stage no-wait scheduling models.
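For concreteness, here is a minimal sketch of how a given job sequence is evaluated in the two-machine no-wait flow shop. Setup and removal times are omitted for brevity, although the paper treats the separated-setup case and its reduction to the Gilmore-Gomory TSP.

```python
# Sketch: makespan of a fixed job sequence in the two-machine no-wait flow shop.
# Setup/removal times are omitted; this only illustrates the no-wait timing logic.

def no_wait_makespan(sequence, a, b):
    """a[j], b[j]: processing times of job j on machines 1 and 2.
    No-wait: a job must start on machine 2 immediately after finishing on machine 1,
    so its start on machine 1 may have to be delayed until machine 2 is free."""
    c1 = 0.0   # completion time of the previous job on machine 1
    c2 = 0.0   # completion time of the previous job on machine 2
    for j in sequence:
        c1 = max(c1 + a[j], c2)   # delay on machine 1 so the job can go straight to machine 2
        c2 = c1 + b[j]            # no-wait transfer to machine 2
    return c2

a = {1: 3, 2: 5, 3: 2}
b = {1: 4, 2: 2, 3: 6}
print(no_wait_makespan([3, 1, 2], a, b))   # compare sequences to search for the best one
```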

Relevance:

10.00%

Publisher:

Abstract:

Purcell's classical vector method for constructing solutions to dense systems of linear equations is extended to a flexible orthogonalisation procedure. Some properties of the orthogonalisation procedure are revealed in relation to classical Gauss-Jordan elimination with or without pivoting. Additional properties that are not shared by classical Gauss-Jordan elimination are exploited. Further properties related to distributed computing are discussed, with applications to panel element equations in subsonic compressible aerodynamics. Using an orthogonalisation procedure within panel methods enables a functional decomposition of the sequential panel methods and leads to a two-level parallelism.
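To illustrate the contrast drawn above, the sketch below solves a small dense system once by an orthogonalisation-based factorisation and once by Gauss-Jordan elimination with partial pivoting. The orthogonalisation step uses a plain QR factorisation as a stand-in; it is not Purcell's vector method itself.

```python
# Orthogonalisation-based solve vs classical Gauss-Jordan elimination (illustration only).
import numpy as np

def solve_by_orthogonalisation(A, b):
    Q, R = np.linalg.qr(A)                 # columns of Q are orthonormal
    return np.linalg.solve(R, Q.T @ b)     # triangular system for the solution

def solve_by_gauss_jordan(A, b):
    M = np.hstack([A.astype(float), b.reshape(-1, 1).astype(float)])
    n = len(b)
    for k in range(n):
        p = k + np.argmax(np.abs(M[k:, k]))   # partial pivoting
        M[[k, p]] = M[[p, k]]
        M[k] /= M[k, k]
        for i in range(n):
            if i != k:
                M[i] -= M[i, k] * M[k]        # eliminate column k everywhere else
    return M[:, -1]

A = np.array([[4.0, 1.0, 2.0], [1.0, 3.0, 0.0], [2.0, 0.0, 5.0]])
b = np.array([1.0, 2.0, 3.0])
print(solve_by_orthogonalisation(A, b), solve_by_gauss_jordan(A, b))  # should agree
```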

Relevance:

10.00%

Publisher:

Abstract:

Quasi-Newton methods are applied to solve interface problems which arise from domain decomposition methods. These interface problems are usually sparse systems of linear or nonlinear equations. We are interested in applying these methods to systems of linear equations where we are unable or unwilling to calculate the Jacobian matrices, as well as to systems of nonlinear equations resulting from nonlinear elliptic problems in the context of domain decomposition. The suitability of these algorithms for parallel implementation on coarse-grained parallel computers is discussed.
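A minimal sketch of the kind of Jacobian-free quasi-Newton iteration referred to above is shown below, using Broyden's rank-one update on a toy two-variable system; the "interface" operator F is an invented stand-in, not the paper's problem.

```python
# Jacobian-free quasi-Newton (Broyden) iteration on a toy nonlinear system.
import numpy as np

def broyden(F, x0, tol=1e-10, max_iter=50):
    x = np.asarray(x0, dtype=float)
    B = np.eye(len(x))                        # initial Jacobian approximation
    f = F(x)
    for _ in range(max_iter):
        if np.linalg.norm(f) < tol:
            break
        s = np.linalg.solve(B, -f)            # quasi-Newton step
        x_new = x + s
        f_new = F(x_new)
        y = f_new - f
        B += np.outer(y - B @ s, s) / (s @ s) # Broyden's rank-one update
        x, f = x_new, f_new
    return x

# Toy nonlinear "interface" system with two unknowns; (1, 1) is a root.
F = lambda v: np.array([v[0] ** 2 + v[1] - 2.0, v[0] + v[1] ** 2 - 2.0])
print(broyden(F, [0.5, 0.5]))                 # converges without forming a Jacobian
```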

Relevance:

10.00%

Publisher:

Abstract:

A defect equation is presented for the coupling of nonlinear subproblems defined on non-overlapping subdomains, as they arise in domain decomposition methods. Numerical solution of the defect equations by means of quasi-Newton methods is considered.

Relevance:

10.00%

Publisher:

Abstract:

We continue the discussion of the decision points in the FUELCON meta-architecture. Having discussed the relation of the original expert system to its sequel projects in terms of an AND/OR tree, we consider one further domain for a neural component: parameter prediction downstream of the core-reload candidate-pattern generator, that is, a replacement for the NOXER simulator currently in use in the project.

Relevance:

10.00%

Publisher:

Abstract:

The paper describes the design of an efficient and robust genetic algorithm for the nuclear fuel loading problem (i.e., refuelling: the in-core fuel management problem), a complex combinatorial, multimodal optimisation problem. Evolutionary computation, as performed by FUELGEN, replaces heuristic search of the kind performed by the FUELCON expert system (CAI 12/4) to solve the same problem. In contrast to the traditional genetic algorithm, which places strong requirements on the representation used and its parameter settings in order to be efficient, recent research on new, robust genetic algorithms shows that representations unsuitable for the traditional genetic algorithm can still be used to good effect with little parameter adjustment. The representation presented here is a simple symbolic one with no linkage attributes, making the genetic algorithm particularly easy to apply to fuel loading problems with differing core structures and assembly inventories. A nonlinear fitness function has been constructed to direct the search efficiently in the presence of the many local optima that result from the constraints on solutions.
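The following toy sketch illustrates the kind of genetic algorithm described: a simple symbolic representation (assembly labels assigned to core positions) evolved with swap mutation under a nonlinear fitness. The fitness function and inventory are illustrative assumptions, not FUELGEN's objective.

```python
# Toy genetic algorithm over a symbolic loading pattern (illustration only).
import random

def fitness(pattern):
    # Nonlinear toy objective: heavily penalise adjacent fresh assemblies ("f"),
    # a crude stand-in for a power-peaking-style constraint.
    adjacent_fresh = sum(1 for x, y in zip(pattern, pattern[1:]) if x == y == "f")
    return -float(adjacent_fresh ** 2)

def evolve(inventory, generations=200, pop_size=30, mut_rate=0.3):
    pop = [random.sample(inventory, len(inventory)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]           # keep the better half
        children = []
        for parent in survivors:
            child = parent[:]
            if random.random() < mut_rate:         # swap two core positions
                i, j = random.sample(range(len(child)), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

inventory = list("ffffbbbbbbbb")                   # 4 fresh + 8 burnt assemblies
print("".join(evolve(inventory)))                  # fresh assemblies end up spread out
```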

Relevance:

10.00%

Publisher:

Abstract:

Over a time span of almost a decade, the FUELCON project in nuclear engineering has led to a fully functional expert system and spawned sequel projects. Its task is in-core fuel management, also called 'refueling', i.e., good fuel allocation for reloading the core of a given nuclear reactor for a given operation cycle. The task is crucial for keeping down operation costs at nuclear power plants. Fuel comes in different types and is positioned in a grid representing the core of a reactor. The tool is useful for practitioners but also helps the expert in the domain to test his or her rules of thumb and to discover new ones.

Relevance:

10.00%

Publisher:

Abstract:

The paper considers single machine due date assignment and scheduling problems with n jobs, in which the due dates are obtained from the processing times by adding a positive slack q. A schedule is feasible if there are no tardy jobs and the job sequence respects given precedence constraints. The value of q is chosen so as to minimize a function ϕ(F, q) which is non-decreasing in each of its arguments, where F is a certain non-decreasing earliness penalty function. Once q is chosen or fixed, the corresponding scheduling problem is to find a feasible schedule with the minimum value of the function F. In the case of arbitrary precedence constraints, the problems under consideration are shown to be NP-hard in the strong sense even when F is total earliness. If the precedence constraints are defined by a series-parallel graph, both the scheduling and the due date assignment problems are proved solvable in polynomial time, provided that F is either the sum of linear functions or the sum of exponential functions. The running time of the algorithms can be reduced further if the jobs are independent. Scope and purpose: We consider single machine due date assignment and scheduling problems and design fast algorithms for their solution under a wide range of assumptions. The problems under consideration arise in production planning when management is faced with the problem of setting realistic due dates for a number of orders. The due dates of the orders are determined by increasing the time needed for their fulfillment by a common positive slack. If the slack is set large enough, the due dates can easily be met, thereby producing a good image of the firm. This, however, may result in substantial holding costs for the finished products before they are brought to the customer. The objective is to explore the trade-off between the size of the slack and the resulting holding costs for the early orders.
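A small sketch may help fix ideas: for one given sequence (assumed to respect the precedence constraints), due dates are d_j = p_j + q, feasibility requires no tardy job, and the best slack for that sequence is the smallest feasible one because ϕ is non-decreasing in both arguments and F grows with q. The quadratic ϕ used below is an illustrative assumption, not the paper's choice.

```python
# Sketch of the slack due-date assignment model for a single fixed job sequence.

def evaluate(sequence_p, q):
    """sequence_p: processing times in sequence order. Returns (feasible, total earliness F)."""
    t, F = 0.0, 0.0
    for p in sequence_p:
        t += p                       # completion time of this job
        d = p + q                    # its assigned due date d_j = p_j + q
        if t > d:
            return False, None       # tardy job: this q is infeasible for the sequence
        F += d - t                   # accumulate earliness
    return True, F

def best_slack(sequence_p, phi):
    # The smallest feasible slack for a fixed sequence is q* = max_j (C_j - p_j).
    t, q_min = 0.0, 0.0
    for p in sequence_p:
        t += p
        q_min = max(q_min, t - p)
    feasible, F = evaluate(sequence_p, q_min)
    return q_min, F, phi(F, q_min)

print(best_slack([2.0, 3.0, 1.0], phi=lambda F, q: F + 0.5 * q ** 2))
```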

Relevance:

10.00%

Publisher:

Abstract:

Multilevel algorithms are a successful class of optimization techniques that address the mesh partitioning problem for mapping meshes onto parallel computers. They usually combine a graph contraction algorithm with a local optimization method that refines the partition at each graph level. To date, these algorithms have been used almost exclusively to minimize the cut-edge weight in the graph, with the aim of minimizing the parallel communication overhead. However, it has been shown that for certain classes of problems, the convergence of the underlying solution algorithm is strongly influenced by the shape or aspect ratio of the subdomains. Therefore, in this paper, the authors modify the multilevel algorithms to optimize a cost function based on the aspect ratio. Several variants of the algorithms are tested and shown to provide excellent results.
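As an illustration of an aspect-ratio-based cost of the kind optimized in the paper, the sketch below scores each subdomain by perimeter squared over (4π times area), which equals 1 for a circle and grows for elongated shapes; this normalisation is an assumption and need not match the paper's exact definition.

```python
# Aspect-ratio-based partition cost: compact subdomains score low, thin strips score high.
import math

def aspect_ratio(area, perimeter):
    # 1.0 for a circle; larger for elongated or ragged shapes (illustrative normalisation)
    return perimeter ** 2 / (4.0 * math.pi * area)

def partition_cost(subdomains):
    """subdomains: list of (area, boundary_length) pairs, one per processor's piece."""
    return sum(aspect_ratio(a, p) for a, p in subdomains)

# Two partitions of the same mesh area: compact square-ish pieces vs thin strips.
compact = [(100.0, 40.0), (100.0, 40.0)]       # roughly 10 x 10 squares
strips  = [(100.0, 104.0), (100.0, 104.0)]     # roughly 2 x 50 strips
print(partition_cost(compact), partition_cost(strips))   # the strips cost much more
```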