958 results for Maximum Set Splitting Problem
Abstract:
One of the most important determinants of dermatological and systemic penetration after topical application is the delivery or flux of solutes into or through the skin. The maximum dose of solute able to be delivered over a given period of time and area of application is defined by its maximum flux (J(max), mol per cm(2) per h) from a given vehicle. In this work, J(max) values from aqueous solution across human skin were acquired or estimated from experimental data and correlated with solute physicochemical properties. Whereas epidermal permeability coefficients (k(p)) are optimally correlated to both solute octanol-water partition coefficient (K-ow) and molecular weight (MW), MW alone was found to be the dominant determinant of J(max) for this literature data set: log J(max) = -3.90 - 0.0190 MW (n=87, r(2)=0.847, p
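Because the regression is reported in plain numeric form, it can be evaluated directly. The short Python sketch below applies it to a hypothetical molecular weight (300 g/mol is an illustrative input, not a value from the study).

```python
def max_flux(mw):
    """Maximum flux J(max) in mol per cm^2 per h from the reported regression
    log10(Jmax) = -3.90 - 0.0190 * MW (MW in g/mol)."""
    return 10 ** (-3.90 - 0.0190 * mw)

# Hypothetical example: a solute of molecular weight 300 g/mol
print(f"J(max) ~ {max_flux(300):.2e} mol cm^-2 h^-1")
```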
Abstract:
Research expeditions into remote areas to collect biological specimens provide vital information for understanding biodiversity. However, major expeditions to little-known areas are expensive and time consuming, time is short, and well-trained people are difficult to find. In addition, processing the collections and obtaining accurate identifications takes time and money. In order to get the maximum return for the investment, we need to determine the location of the collecting expeditions carefully. In this study we used environmental variables and information on existing collecting localities to help determine the sites of future expeditions. Results from other studies were used to aid in the selection of the environmental variables, including variables relating to temperature, rainfall, lithology and distance between sites. A survey gap analysis tool based on 'ED complementarity' was employed to select the sites that would most likely contribute the most new taxa. The tool does not evaluate how well collected a previously visited survey site might be; however, collecting effort was estimated based on species accumulation curves. We used the number of collections and/or number of species at each collecting site to eliminate those we deemed poorly collected. Plants, birds, and insects from Guyana were examined using the survey gap analysis tool, and sites for future collecting expeditions were determined. The south-east section of Guyana had virtually no collecting information available. It has been inaccessible for many years for political reasons and as a result, eight of the first ten sites selected were in that area. In order to evaluate the remainder of the country, and because there are no immediate plans by the Government of Guyana to open that area to exploration, that section of the country was not included in the remainder of the study. The range of the ED complementarity values dropped sharply after the first ten sites were selected. For plants, the group for which we had the most records, areas selected included several localities in the Pakaraima Mountains, the border with the south-east, and one site in the north-west. For birds, a moderately collected group, the strongest need was in the north-west followed by the east. Insects had the smallest data set and the largest range of ED complementarity values; the results gave strong emphasis to the southern parts of the country, but most of the locations appeared to be equidistant from one another, most likely because of insufficient data. Results demonstrate that the use of a survey gap analysis tool designed to solve a locational problem using continuous environmental data can help maximize our resources for gathering new information on biodiversity. (c) 2005 The Linnean Society of London.
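The locational idea can be illustrated with a much simpler surrogate than the actual ED-complementarity tool: greedily pick the candidate site that is environmentally most distant from everything already surveyed. The sketch below uses hypothetical standardized environmental vectors (e.g. temperature, rainfall) and a maximin rule; it is not the algorithm used in the study, only a sketch of the selection logic.

```python
import numpy as np

def greedy_survey_sites(candidates, surveyed, k):
    """Greedily choose k candidate sites, each time taking the one whose
    minimum environmental distance to all already-covered sites is largest
    (a crude maximin surrogate for ED complementarity)."""
    covered = list(surveyed)
    chosen = []
    remaining = list(range(len(candidates)))
    for _ in range(k):
        def gain(i):
            return min(np.linalg.norm(candidates[i] - c) for c in covered)
        best = max(remaining, key=gain)
        chosen.append(best)
        covered.append(candidates[best])
        remaining.remove(best)
    return chosen

# Hypothetical standardized environmental variables (rows = sites)
rng = np.random.default_rng(0)
candidates = rng.normal(size=(20, 4))   # unsurveyed candidate sites
surveyed = rng.normal(size=(5, 4))      # well-collected existing sites
print(greedy_survey_sites(candidates, surveyed, k=3))
```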
Abstract:
This paper defines the 3D reconstruction problem as the process of reconstructing a 3D scene from numerous 2D visual images of that scene. It is well known that this problem is ill-posed, and numerous constraints and assumptions are used in 3D reconstruction algorithms in order to reduce the solution space. Unfortunately, most constraints only work in a certain range of situations and often constraints are built into the most fundamental methods (e.g. Area Based Matching assumes that all the pixels in the window belong to the same object). This paper presents a novel formulation of the 3D reconstruction problem, using a voxel framework and first-order logic equations, which does not contain any additional constraints or assumptions. Solving this formulation for a set of input images gives all the possible solutions for that set, rather than picking a solution that is deemed most likely. Using this formulation, this paper studies the problem of uniqueness in 3D reconstruction and how the solution space changes for different configurations of input images. It is found that it is not possible to guarantee a unique solution, no matter how many images are taken of the scene, their orientation or even how much color variation is in the scene itself. Results of using the formulation to reconstruct a few small voxel spaces are also presented. They show that the number of solutions is extremely large for even very small voxel spaces (a 5 x 5 voxel space gives 10 to 10(7) solutions). This shows the need for constraints to reduce the solution space to a reasonable size. Finally, it is noted that because of the discrete nature of the formulation, the solution space size can be easily calculated, making the formulation a useful tool to numerically evaluate the usefulness of any constraints that are added.
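The combinatorial explosion reported here can be reproduced on a toy analogue: a tiny 2D binary "voxel" grid observed only through its row and column projections. The brute-force count below is not the paper's first-order-logic formulation; it is a hypothetical, much-simplified stand-in that shows how many occupancy patterns can be consistent with the same set of observations, and how the discrete solution space can be counted exactly.

```python
from itertools import product

def consistent_grids(row_obs, col_obs):
    """Count binary n x m grids whose row sums and column sums match the
    given 'observations' (a crude 2D stand-in for multi-view consistency)."""
    n, m = len(row_obs), len(col_obs)
    count = 0
    for cells in product((0, 1), repeat=n * m):
        grid = [cells[i * m:(i + 1) * m] for i in range(n)]
        if [sum(r) for r in grid] == row_obs and \
           [sum(grid[i][j] for i in range(n)) for j in range(m)] == col_obs:
            count += 1
    return count

# Two 'views' of an unknown 3x3 scene: many grids may satisfy both
print(consistent_grids([2, 1, 2], [2, 1, 2]))
```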
Abstract:
A restricted maximum likelihood analysis applied to an animal model showed no significant differences (P > 0.05) in pH value of the longissimus dorsi measured at 24 h post-mortem (pH24) between high and low lines of Large White pigs selected over 4 years for post-weaning growth rate on restricted feeding. Genetic and phenotypic correlations between pH24 and production and carcass traits were estimated using all performance testing records combined with the pH24 measurements (5.05–7.02) on slaughtered animals. The estimate of heritability for pH24 was moderate (0.29 ± 0.18). Genetic correlations between pH24 and production or carcass composition traits, except for ultrasonic backfat (UBF), were not significantly different from zero. UBF had a moderate, positive genetic correlation with pH24 (0.24 ± 0.33). These estimates of genetic correlations affirmed that selection for increased growth rate on restricted feeding is likely to result in limited changes in pH24 and pork quality since the selection does not put a high emphasis on reduced fatness.
Abstract:
The maximum possible volume of a simple, non-Steiner (v, 3, 2) trade was determined for all v by Khosrovshahi and Torabi (Ars Combinatoria 51 (1999), 211-223), except that in the case v ≡ 5 (mod 6), v ≥ 23, they were only able to provide an upper bound on the volume. In this paper we construct trades with volume equal to that bound for all v ≡ 5 (mod 6), thus completing the problem.
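For readers unfamiliar with trades: a (v, 3, 2) trade consists of two disjoint collections of 3-element blocks that cover every pair of points equally often, and its volume is the number of blocks in each collection. The check below verifies the classical volume-4 trade on 6 points; it only illustrates the definition, not the constructions given in the paper.

```python
from collections import Counter
from itertools import combinations

def is_trade(t1, t2):
    """True if (t1, t2) is a (v,3,2) trade: disjoint block collections
    with identical pair-coverage multisets."""
    pairs = lambda blocks: Counter(p for b in blocks
                                   for p in combinations(sorted(b), 2))
    disjoint = not (set(map(frozenset, t1)) & set(map(frozenset, t2)))
    return disjoint and pairs(t1) == pairs(t2)

# Classical volume-4 (6,3,2) trade
T1 = [{1, 2, 3}, {1, 4, 5}, {2, 4, 6}, {3, 5, 6}]
T2 = [{1, 2, 4}, {1, 3, 5}, {2, 3, 6}, {4, 5, 6}]
print(is_trade(T1, T2), "volume =", len(T1))
```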
Abstract:
Reducing fossil fuel consumption and developing energy-saving technologies are issues of central importance for both industry and research, owing to the drastic effects that anthropogenic pollutant emissions are having on the environment. While a growing number of standards and regulations are being issued to address these problems, the need to develop low-emission technologies is driving research in many industrial sectors. Although the deployment of renewable energy sources is seen as the most promising long-term solution, an effective and complete integration of these technologies is currently impracticable, both because of technical constraints and because of the sheer share of energy production, currently met by fossil sources, that the alternative technologies would have to cover. The optimization of energy production and management, on the other hand, combined with the development of technologies for reducing energy consumption, represents a suitable solution to the problem, and one that can be deployed within shorter time horizons. The objective of this thesis is to investigate, develop and apply a set of numerical tools for optimizing the design and management of energy processes, which can be used to reduce fuel consumption and improve energy efficiency. The methodology developed relies on a numerical system-modelling approach, exploiting the predictive capabilities that derive from a mathematical representation of the processes to develop optimization strategies for them under realistic operating conditions. In developing these procedures, particular emphasis is placed on the need to derive proper management strategies that account for the dynamics of the plants analysed, so as to obtain the best performance during actual operation. In the course of the thesis, the energy optimization problem was addressed with reference to three different technological applications. In the first, a multi-source plant for meeting the energy demand of a commercial building was considered. Since such a system uses several technologies to produce the thermal and electrical energy required by the users, the correct load-sharing strategy must be identified in order to guarantee the maximum energy efficiency of the plant. Based on a simplified model of the plant, the problem was solved by applying a deterministic Dynamic Programming algorithm, and the results were compared with those obtained using a simpler rule-based strategy, thereby demonstrating the advantages of adopting an optimal control strategy. In the second application, the design of a hybrid solution for energy recovery from a hydraulic excavator was investigated. Since different technological layouts can be conceived to implement this solution and the additional components introduced need to be properly sized, a methodology is required to assess the maximum performance obtainable from each of these alternative solutions.
The comparison among the different layouts was therefore carried out on the basis of the energy performance of the machine over a standardized digging cycle, estimated with the aid of a detailed model of the system. Since adding energy-recovery devices introduces additional degrees of freedom into the system, their optimal control strategy also had to be determined in order to assess the maximum performance obtainable from each layout. This problem was again solved with a Dynamic Programming algorithm exploiting a simplified model of the system devised for the purpose. Once the optimal performance of each design solution had been determined, a fair comparison among the alternatives was possible. In the third and final application, an organic Rankine cycle (ORC) plant for recovering waste heat from passenger-car exhaust gases was analysed. Although ORC plants can potentially yield significant increases in vehicle fuel savings, their proper operation requires complex control strategies able to cope with the variability of the heat source driving the process; moreover, while maximizing fuel savings, the system must be kept within safe operating conditions. To address the problem, a robust and effective model of the plant was developed, based on the Moving Boundary Methodology, to simulate the phase-change dynamics of the organic fluid and estimate the plant performance. This model was then used to design a model predictive controller (MPC) able to estimate the optimal control parameters for managing the system during transient operation. An algorithm based on Particle Swarm Optimization was developed to solve the corresponding nonlinear dynamic optimization problem. The results obtained with this controller were compared with those obtainable from a classical proportional-integral (PI) controller, again showing the energy advantages of adopting an optimal control strategy.
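To make the role of deterministic Dynamic Programming concrete, the sketch below solves a toy load-allocation problem of the kind described for the first application: a cheap unit charging a small heat storage versus an expensive backup boiler. All demands, costs, output levels and capacities are hypothetical, and the model is far simpler than the plant models used in the thesis.

```python
from functools import lru_cache
import math

demand = (3, 5, 2, 6)                 # hypothetical hourly heat demand
chp_cost, boiler_cost = 1.0, 2.5      # cost per unit of heat produced
chp_levels = (0, 2, 4)                # admissible outputs of the cheap unit
storage_max = 4                       # heat storage capacity

@lru_cache(maxsize=None)
def optimal_cost(t, s):
    """Minimum cost from hour t onward with storage level s (backward DP).
    Shortfalls are covered by the boiler; surpluses charge the storage."""
    if t == len(demand):
        return 0.0
    best = math.inf
    for q in chp_levels:
        surplus = s + q - demand[t]
        s_next = min(max(surplus, 0), storage_max)   # spill any excess heat
        shortfall = max(-surplus, 0)
        stage_cost = q * chp_cost + shortfall * boiler_cost
        best = min(best, stage_cost + optimal_cost(t + 1, s_next))
    return best

print("optimal fuel cost:", optimal_cost(0, 0))
```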
Abstract:
This paper addresses the problem of novelty detection in the case that the observed data is a mixture of a known 'background' process contaminated with an unknown other process, which generates the outliers, or novel observations. The framework we describe here is quite general, employing univariate classification with incomplete information, based on knowledge of the distribution (the 'probability density function', 'pdf') of the data generated by the 'background' process. The relative proportion of this 'background' component (the prior 'background' probability), the 'pdf' and the 'prior' probabilities of all other components are all assumed unknown. The main contribution is a new classification scheme that identifies the maximum proportion of observed data following the known 'background' distribution. The method exploits the Kolmogorov-Smirnov test to estimate the proportions, and afterwards data are Bayes optimally separated. Results, demonstrated with synthetic data, show that this approach can produce more reliable results than a standard novelty detection scheme. The classification algorithm is then applied to the problem of identifying outliers in the SIC2004 data set, in order to detect the radioactive release simulated in the 'joker' data set. We propose this method as a reliable means of novelty detection in the emergency situation which can also be used to identify outliers prior to the application of a more general automatic mapping algorithm. © Springer-Verlag 2007.
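The central quantity, the largest fraction of the data compatible with the known background distribution, can be bounded with a simple CDF comparison. The sketch below is a simplified stand-in for the paper's Kolmogorov-Smirnov-based estimator (it omits the Bayes-optimal separation step), and the Gaussian background, the contaminating component and the DKW-style allowance are all assumptions for illustration.

```python
import numpy as np
from scipy.stats import norm

def max_background_fraction(x, bg_cdf, alpha=0.05):
    """Upper bound on the mixing proportion of a known 'background'
    component, from a CDF comparison with a DKW sampling allowance."""
    x = np.sort(np.asarray(x))
    n = len(x)
    f_emp = np.arange(1, n + 1) / n
    f_bg = bg_cdf(x)
    eps = np.sqrt(np.log(2 / alpha) / (2 * n))          # DKW allowance
    lo = (f_emp + eps) / np.clip(f_bg, 1e-12, None)
    hi = (1 - f_emp + eps) / np.clip(1 - f_bg, 1e-12, None)
    return float(min(1.0, np.min(np.minimum(lo, hi))))

# Hypothetical data: 80% N(0,1) background contaminated by 20% N(5,1)
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(0, 1, 800), rng.normal(5, 1, 200)])
print(max_background_fraction(data, norm.cdf))  # slightly above the true 0.8
```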
Abstract:
The ERS-1 Satellite was launched in July 1991 by the European Space Agency into a polar orbit at about 800 km, carrying a C-band scatterometer. A scatterometer measures the amount of backscatter microwave radiation reflected by small ripples on the ocean surface induced by sea-surface winds, and so provides instantaneous snap-shots of wind flow over large areas of the ocean surface, known as wind fields. Inherent in the physics of the observation process is an ambiguity in wind direction; the scatterometer cannot distinguish if the wind is blowing toward or away from the sensor device. This ambiguity implies that there is a one-to-many mapping between scatterometer data and wind direction. Current operational methods for wind field retrieval are based on the retrieval of wind vectors from satellite scatterometer data, followed by a disambiguation and filtering process that is reliant on numerical weather prediction models. The wind vectors are retrieved by the local inversion of a forward model, mapping scatterometer observations to wind vectors, and minimising a cost function in scatterometer measurement space. This thesis applies a pragmatic Bayesian solution to the problem. The likelihood is a combination of conditional probability distributions for the local wind vectors given the scatterometer data. The prior distribution is a vector Gaussian process that provides the geophysical consistency for the wind field. The wind vectors are retrieved directly from the scatterometer data by using mixture density networks, a principled method to model multi-modal conditional probability density functions. The complexity of the mapping and the structure of the conditional probability density function are investigated. A hybrid mixture density network, that incorporates the knowledge that the conditional probability distribution of the observation process is predominantly bi-modal, is developed. The optimal model, which generalises across a swathe of scatterometer readings, is better on key performance measures than the current operational model. Wind field retrieval is approached from three perspectives. The first is a non-autonomous method that confirms the validity of the model by retrieving the correct wind field 99% of the time from a test set of 575 wind fields. The second technique takes the maximum a posteriori probability wind field retrieved from the posterior distribution as the prediction. For the third technique, Markov Chain Monte Carlo (MCMC) techniques were employed to estimate the mass associated with significant modes of the posterior distribution, and make predictions based on the mode with the greatest mass associated with it. General methods for sampling from multi-modal distributions were benchmarked against a specific MCMC transition kernel designed for this problem. It was shown that the general methods were unsuitable for this application due to computational expense. On a test set of 100 wind fields the MAP estimate correctly retrieved 72 wind fields, whilst the sampling method correctly retrieved 73 wind fields.
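The central modelling device here, a mixture density network's output distribution, is a Gaussian mixture whose parameters are predicted for each input. The sketch below evaluates the negative log-likelihood of a 1D target under such a mixture; it is a generic illustration with hypothetical parameter values, not the hybrid bi-modal MDN for wind vectors developed in the thesis.

```python
import numpy as np

def mdn_nll(pi, mu, sigma, t):
    """Negative log-likelihood of target t under a 1D Gaussian mixture with
    mixing coefficients pi, means mu and standard deviations sigma (the kind
    of conditional density an MDN outputs for each input)."""
    pi, mu, sigma = map(np.asarray, (pi, mu, sigma))
    comp = pi * np.exp(-0.5 * ((t - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return -np.log(np.sum(comp))

# A bi-modal conditional density, e.g. two candidate wind directions (degrees)
print(mdn_nll(pi=[0.5, 0.5], mu=[30.0, 210.0], sigma=[10.0, 10.0], t=35.0))
```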
Abstract:
This thesis is presented in two parts. The first part is an attempt to set out a framework of factors influencing the problem solving stage of the architectural design process. The discussion covers the nature of architectural problems and some of the main ways in which they differ from other types of design problems. The structure of constraints that both the problem and the architect impose upon solutions are seen as of great importance in defining the type of design problem solving situation. The problem solver, or architect, is then studied. The literature of the psychology of thinking is surveyed for relevant work. All of the traditional schools of psychology are found wanting in terms of providing a comprehensive theory of thinking. Various types of thinking are examined, particularly structural and productive thought, for their relevance to design problem solving. Finally some reported common traits of architects are briefly reviewed. The second section is a report of two main experiments which model some aspects of architectural design problem solving. The first experiment examines the way in which architects come to understand the structure of their problems. The performances of first and final year architectural students are compared with those of postgraduate science students and sixth form pupils. On the whole these groups show significantly different results and also different cognitive strategies. The second experiment poses design problems which involve both subjective and objective criteria, and examines the way in which final year architectural students are able to relate the different types of constraint produced. In the final section the significance of all the results is suggested. Some educational and methodological implications are discussed and some further experiments and investigations are proposed.
Abstract:
Qualitative research can make a valuable contribution to the study of quality and safety in health care. Sound ways of appraising qualitative research are needed, but currently there are many different proposals with few signs of an emerging consensus. One problem has been the tendency to treat qualitative research as a unified field. We distinguish universal features of quality from those specific to methodology and offer a set of minimally prescriptive prompts to assist with the assessment of generic features of qualitative research. In using these, account will need to be taken of the particular method of data collection and methodological approach being used. There may be a need for appraisal criteria suited to the different methods of qualitative data collection and to different methodological approaches. These more specific criteria would help to distinguish fatal flaws from more minor errors in the design, conduct, and reporting of qualitative research. There will be difficulties in doing this because some aspects of qualitative research, particularly those relating to quality of insight and interpretation, will remain difficult to appraise and will rely largely on subjective judgement.
Abstract:
Existing assignment problems for assigning n jobs to n individuals are limited to considering costs or profits measured as crisp values. However, in many real applications, costs are not deterministic numbers. This paper develops a procedure based on the Data Envelopment Analysis (DEA) method to solve assignment problems with fuzzy costs or fuzzy profits for each possible assignment. It aims to obtain the points with maximum membership values for the fuzzy parameters while maximizing the profit or minimizing the assignment cost. In this method, a discrete approach is presented to rank the fuzzy numbers first. Then, corresponding to each fuzzy number, we introduce a crisp number using the efficiency concept. A numerical example is used to illustrate the usefulness of this new method. © 2012 Operational Research Society Ltd. All rights reserved.
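The underlying assignment step can be shown with a deliberately simplified stand-in: triangular fuzzy costs are reduced to crisp representatives by their centroids, and the resulting crisp problem is solved with the Hungarian method. This replaces the paper's DEA-based ranking with a naive defuzzification purely for illustration, and the cost values are hypothetical.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical triangular fuzzy costs (low, mode, high) for 3 jobs x 3 workers
fuzzy_costs = np.array([
    [(2, 3, 5), (4, 5, 7), (6, 8, 9)],
    [(3, 4, 6), (2, 2, 4), (5, 6, 8)],
    [(5, 7, 9), (3, 5, 6), (1, 2, 3)],
], dtype=float)

# Naive defuzzification: centroid (a + b + c) / 3 of each triangular number
crisp = fuzzy_costs.mean(axis=2)

rows, cols = linear_sum_assignment(crisp)        # Hungarian method
print(list(zip(rows, cols)), "total cost:", crisp[rows, cols].sum())
```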
Abstract:
Expert systems (ES) are a class of computer programs developed by researchers in artificial intelligence. In essence, they are programs made up of a set of rules that analyze information about a specific class of problems, provide an analysis of those problems and, depending upon their design, recommend a course of user action to implement corrections. ES are computerized tools designed to enhance the quality and availability of knowledge required by decision makers in a wide range of industries. Decision-making is important for the financial institutions involved due to the high level of risk associated with wrong decisions. The process of making decisions is complex and unstructured. The existing models for decision-making do not capture the learned knowledge well enough. In this study, we analyze the beneficial aspects of using ES for the decision-making process.
Abstract:
* Partially supported by contract MM 523/95 with the Ministry of Science and Technologies of the Republic of Bulgaria.
Abstract:
We consider the problem of finding a ranking of objects that is nearest to a cyclic relation specified between the objects by an expert. A formalization of the resulting problem is given. An algorithm is proposed, based on the method of sequential analysis of variants and on the analysis of acyclicity conditions.
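For a small number of objects the problem can be stated very directly: find the linear order that disagrees with the expert's (possibly cyclic) pairwise relation on as few pairs as possible. The brute-force sketch below enumerates all permutations; the paper's algorithm uses sequential analysis of variants with acyclicity conditions rather than exhaustive search, and the example relation is hypothetical.

```python
from itertools import permutations

def nearest_ranking(objects, prefers):
    """Return the linear order minimizing the number of expert preferences
    (a, b) it violates, i.e. pairs where b is ranked before a."""
    def violations(order):
        pos = {obj: i for i, obj in enumerate(order)}
        return sum(1 for a, b in prefers if pos[a] > pos[b])
    return min(permutations(objects), key=violations)

# Hypothetical expert relation containing a cycle: A>B, B>C, C>A, plus A>D
prefers = [("A", "B"), ("B", "C"), ("C", "A"), ("A", "D")]
print(nearest_ranking("ABCD", prefers))
```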