934 results for search methods
Abstract:
Aims. In this work we search for signatures of low-dimensional chaos in the temporal behavior of the Kepler-field blazar W2R 1946+42. Methods. We use a publicly available, ~160,000-point-long and mostly equally spaced light curve of W2R 1946+42. We apply the correlation integral method to both the real dataset and phase-randomized surrogates. Results. We are not able to confirm the presence of low-dimensional chaos in the light curve. This result, however, still leads to some important implications for blazar emission mechanisms, which are discussed.
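For readers unfamiliar with the method, here is a minimal sketch of the correlation integral (Grassberger-Procaccia) estimate on a stand-in series; the embedding parameters and the noise stand-in are illustrative assumptions, not the paper's pipeline:

```python
import numpy as np
from scipy.spatial.distance import pdist

def delay_embed(x, dim, tau):
    """Time-delay embedding of a scalar series into dim-dimensional vectors."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def correlation_sum(points, r):
    """Grassberger-Procaccia C(r): fraction of point pairs closer than r."""
    return np.mean(pdist(points) < r)

# The slope of log C(r) vs log r estimates the correlation dimension; for
# low-dimensional chaos it saturates as the embedding dimension grows,
# whereas for noise (and for phase-randomized surrogates) it keeps rising.
rng = np.random.default_rng(0)
x = rng.standard_normal(2000)                # stand-in for the light curve
emb = delay_embed(x, dim=4, tau=10)
rs = np.logspace(-0.5, 0.8, 12)
cs = np.array([correlation_sum(emb, r) for r in rs])
slope = np.polyfit(np.log(rs), np.log(np.maximum(cs, 1e-12)), 1)[0]
print(f"correlation-dimension estimate at embedding dim 4: {slope:.2f}")
```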
Abstract:
We consider the problem of optimizing the workforce of a service system. Adapting the staffing levels in such systems is non-trivial: workload varies widely, and the large number of system parameters rules out a brute-force search. Further, because these parameters change on a weekly basis, the optimization should not take longer than a few hours. Our aim is to find the optimal staffing levels from a discrete high-dimensional parameter set that minimize the long-run average of a single-stage cost function, while adhering to constraints on queue stability and service-level agreement (SLA) compliance. The single-stage cost function balances the conflicting objectives of utilizing workers better and attaining the target SLAs. We formulate this problem as a constrained Markov cost process parameterized by the (discrete) staffing levels. We propose novel simultaneous perturbation stochastic approximation (SPSA)-based algorithms for solving this problem. The algorithms include both first-order and second-order methods and incorporate SPSA-based gradient/Hessian estimates for primal descent, while performing dual ascent on the Lagrange multipliers. Both algorithms are online and update the staffing levels incrementally. Further, they involve a certain generalized smooth projection operator, which is essential for projecting the continuous-valued worker parameter tuned by our algorithms onto the discrete set. The smoothness is necessary to ensure that the underlying transition dynamics of the constrained Markov cost process are themselves smooth (as a function of the continuous-valued parameter): a critical requirement in proving the convergence of both algorithms. We validate our algorithms via performance simulations based on data from five real-life service systems. For the sake of comparison, we also implement a scatter-search-based algorithm using the state-of-the-art optimization toolkit OptQuest. From the experiments, we observe that both our algorithms converge empirically and consistently outperform OptQuest in most of the settings considered. This finding, coupled with the computational advantage of our algorithms, makes them amenable for adaptive labor staffing in real-life service systems.
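To make the SPSA component concrete, here is a minimal sketch of the two-measurement gradient estimate and an incremental primal update. The toy cost, step-size schedule, and clip-based projection are illustrative assumptions; the paper's Lagrangian cost and its generalized smooth projection onto the discrete staffing set are more involved.

```python
import numpy as np

rng = np.random.default_rng(1)

def spsa_gradient(cost, theta, c=0.1):
    """Two-measurement SPSA gradient estimate at theta."""
    delta = rng.choice([-1.0, 1.0], size=theta.shape)   # Rademacher perturbation
    return (cost(theta + c * delta) - cost(theta - c * delta)) / (2.0 * c * delta)

def project(theta, lo=0.0, hi=50.0):
    """Keep the continuous worker parameter in a feasible box. The paper's
    generalized smooth projection onto the discrete staffing set is more
    elaborate; clipping stands in for it here."""
    return np.clip(theta, lo, hi)

# Toy noisy single-stage cost: squared deviation from a target staffing level.
target = np.array([12.0, 7.0, 3.0])
cost = lambda th: float(np.sum((th - target) ** 2) + rng.normal(scale=0.1))

theta = np.array([5.0, 5.0, 5.0])
for k in range(200):
    a = 1.0 / (k + 10)                                  # diminishing step size
    theta = project(theta - a * spsa_gradient(cost, theta))
print("staffing parameter after 200 updates:", np.round(theta, 2))
```

Note how SPSA needs only two cost evaluations per step regardless of the dimension of theta, which is what makes it attractive for high-dimensional staffing problems.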
Abstract:
Background: In the post-genomic era, where sequences are being determined at a rapid rate, we are highly reliant on computational methods for their tentative biochemical characterization. The Pfam database currently contains 3,786 families corresponding to "Domains of Unknown Function" (DUF) or "Uncharacterized Protein Family" (UPF), of which 3,087 families have no reported three-dimensional structure, constituting almost one-fourth of the known protein families in search of both structure and function. Results: We applied a 'computational structural genomics' approach using five state-of-the-art remote similarity detection methods to detect relationships between uncharacterized DUFs and domain families of known structure. The association with a structural domain family could serve as a starting point in elucidating the function of a DUF. Amongst these five methods, searches in the SCOP-NrichD database have been applied for the first time. Predictions were classified into high, medium, and low confidence based on the consensus of results from the various approaches and were also annotated with enzyme and Gene Ontology terms. 614 uncharacterized DUFs could be associated with a known structural domain, of which high-confidence predictions, involving at least four methods, were made for 54 families. These structure-function relationships for the 614 DUF families can be accessed online at http://proline.biochem.iisc.ernet.in/RHD_DUFS/. For potential enzymes in this set, we assessed their compatibility with the associated fold and performed detailed structural and functional annotation by examining alignments and the extent of conservation of functional residues. Detailed discussion is provided for interesting assignments for DUF3050, DUF1636, DUF1572, DUF2092 and DUF659. Conclusions: This study provides insights into the structure and potential function of nearly 20% of the DUFs. The use of different computational approaches enables us to reliably recognize distant relationships, especially when they converge on a common assignment, because the methods are often complementary. We observe that while pointers to the structural domain can offer the right clues to the function of a protein, recognition of its precise functional role is still non-trivial, with many DUF domains conserving only some of the critical residues. It is not clear whether these are functional vestiges or instances involving alternate substrates and interacting partners. Reviewers: This article was reviewed by Drs Eugene Koonin, Frank Eisenhaber and Srikrishna Subramanian.
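A minimal sketch of the consensus-based confidence scheme described above. The method names other than SCOP-NrichD are placeholders, and the medium/low thresholds are illustrative assumptions; only the "at least four methods = high confidence" cut-off comes from the abstract:

```python
from collections import Counter

def consensus_confidence(assignments):
    """Classify a DUF -> structural-domain prediction by method consensus.
    `assignments` maps a method name to its predicted domain (or None).
    The 'at least four methods = high confidence' cut-off follows the
    abstract; the medium/low thresholds here are illustrative."""
    votes = Counter(a for a in assignments.values() if a is not None)
    if not votes:
        return None, "unassigned"
    domain, n = votes.most_common(1)[0]
    level = "high" if n >= 4 else "medium" if n >= 2 else "low"
    return domain, level

# Method names other than SCOP-NrichD are placeholders.
print(consensus_confidence({"SCOP-NrichD": "TIM-barrel", "method2": "TIM-barrel",
                            "method3": "TIM-barrel", "method4": "TIM-barrel",
                            "method5": None}))
# -> ('TIM-barrel', 'high')
```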
Abstract:
The stability of a soil slope is usually analyzed by limit equilibrium methods, in which the identification of the critical slip surface is of principal importance. In this study, spline curves in conjunction with a genetic algorithm are used to search for the critical slip surface, and Spencer's method is employed to calculate the factor of safety. Three examples are presented to illustrate the reliability and efficiency of the method. Slip surfaces defined by a series of straight lines are compared with those defined by spline curves, and the results indicate that the use of spline curves yields better results for a given number of slip-surface nodal points than the approximation using straight-line segments.
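A hedged sketch of the search loop, assuming a spline through a fixed set of nodal-point abscissae and treating Spencer's method as a black box; the toy factor-of-safety function, population size, and GA operators below are illustrative, not the paper's:

```python
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(2)
x_nodes = np.linspace(0.0, 1.0, 6)           # fixed nodal-point abscissae

def factor_of_safety(surface):
    """Stand-in for Spencer's method on a candidate slip surface; a real
    implementation would carry out the limit-equilibrium computation."""
    xs = np.linspace(0.0, 1.0, 50)
    return 1.0 + np.mean((surface(xs) - 0.5 * xs) ** 2)   # toy objective

pop = rng.uniform(0.0, 1.0, size=(40, 6))    # y-coordinates of nodal points
for gen in range(100):
    fos = np.array([factor_of_safety(CubicSpline(x_nodes, ind)) for ind in pop])
    parents = pop[np.argsort(fos)[:20]]      # keep the most critical half
    mates = parents[rng.permutation(20)]
    children = 0.5 * (parents + mates)       # arithmetic crossover
    children += rng.normal(0.0, 0.02, children.shape)     # mutation
    pop = np.vstack([parents, children])

best = min(pop, key=lambda ind: factor_of_safety(CubicSpline(x_nodes, ind)))
```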
Abstract:
Secondary metabolites are produced by aquatic plants, and in some instances, exudation of these metabolites into the surrounding water has been detected. To determine whether infestations of Eurasian watermilfoil or hydrilla produce such exudates, plant tissues and water samples were collected from laboratory cultures and pond populations and were analyzed using solid-phase extraction, HPLC, and various methods of mass spectrometry, including electrospray ionization, GC/MS, electron impact, and chemical ionization. Previously reported compounds such as tellimagrandin II (from Eurasian watermilfoil) and a caffeic acid ester (from hydrilla), along with a newly discovered flavonoid, cyanidin 3-dimalonyl glucoside (from hydrilla), were readily detected in the plant tissues used in this research but were not detected in any of the water samples. If compounds are being released, as suggested by researchers using axenic cultures, we hypothesize that they may be rapidly degraded by bacteria and therefore undetectable.
Abstract:
Many particles proposed by theories, such as GUT monopoles, nuclearites and 1/5-charge superstring particles, can be categorized as Slow-moving, Ionizing, Massive Particles (SIMPs).
Detailed calculations of the signal-to-noise ratios in various acoustic and mechanical methods for detecting such SIMPs are presented. It is shown that the previous belief that such methods are intrinsically prohibited by thermal noise is incorrect, and that ways to solve the thermal noise problem are already within the reach of today's technology. In fact, many running and finished gravitational wave detection (GWD) experiments are already sensitive to certain SIMPs. As an example, a published GWD result is used to obtain a flux limit for nuclearites.
The result of a search using a scintillator array on Earth's surface is reported. A flux limit of 4.7 × 10^-12 cm^-2 sr^-1 s^-1 (90% C.L.) is set for any SIMP with 2.7 × 10^-4 < β < 5 × 10^-3 and ionization greater than 1/3 that of minimum-ionizing muons. Although this limit is above the limits from underground experiments for typical supermassive particles (10^16 GeV), it is a new limit in certain β and ionization regions for less massive particles (~10^9 GeV) not able to penetrate deep underground, and it implies a stringent limit on the fraction of the dark matter that can be composed of massive electrically and/or magnetically charged particles.
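For context, a zero-candidate counting search yields a 90% C.L. flux upper limit via Poisson statistics; this is the standard construction, stated here as an assumption about the analysis, and the thesis's detector acceptance and live time are not reproduced:

```latex
\Phi_{90\%} = \frac{2.30}{(A\Omega)\, T\, \epsilon}
```

Here AΩ is the detector acceptance in cm² sr, T the live time in s, ε the detection efficiency, and 2.30 the 90% C.L. Poisson upper limit on the mean number of events when zero are observed.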
The prospects for a future SIMP search in the MACRO detector are discussed. The special problem of triggering on SIMPs is examined, and a circuit is proposed that may solve most of the problems of previous designs and may even enable MACRO to detect certain SIMP species with β as low as the orbital velocity around the Earth.
Abstract:
In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and greater model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.
We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which informs the next most informative test to run. BROAD uses the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
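A simplified sketch of the adaptive loop, under strong assumptions: noiseless theory predictions tabulated up front, a posterior-mass proxy for the full EC2 edge-weight objective, and a fixed response-noise likelihood. It illustrates the idea of sequential informative testing, not BROAD itself:

```python
import numpy as np

# Toy setting: H candidate theories, T candidate tests, binary responses.
# predict[t, h] is the response theory h predicts on test t (tabulated,
# noiseless predictions -- a simplifying assumption).
rng = np.random.default_rng(3)
H, T = 5, 40
predict = rng.integers(0, 2, size=(T, H))
posterior = np.full(H, 1.0 / H)

def separation_score(t, p):
    """Posterior mass of theory pairs a test separates. EC2 proper scores
    tests by the weight of hypothesis-graph edges they cut; with binary
    responses this mass-of-disagreeing-pairs quantity is a simple proxy."""
    mass1 = p[predict[t] == 1].sum()
    return mass1 * (1.0 - mass1)

truth = 2                                   # hidden ground-truth theory
for step in range(6):
    best_t = max(range(T), key=lambda t: separation_score(t, posterior))
    response = predict[best_t, truth]       # a real subject answers here
    likelihood = np.where(predict[best_t] == response, 0.95, 0.05)
    posterior = likelihood * posterior      # Bayesian update over theories
    posterior /= posterior.sum()
print("posterior over theories:", np.round(posterior, 3))
```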
We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely expected value, prospect theory, constant relative risk aversion (CRRA) and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e. subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out, since it is infeasible in practice and since we do not find any signatures of it in our data.
In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models (quasi-hyperbolic (α, β) discounting and fixed-cost discounting), and generalized-hyperbolic discounting. Forty subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present-bias models and hyperbolic discounting, and most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.
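For reference, these families are commonly written in the following forms (a sketch using conventional parameter names, which may differ from the thesis's (α, β) notation; δ is the per-period discount factor, k and α curvature parameters, and β the present-bias weight):

```latex
D_{\mathrm{exp}}(t) = \delta^{t}, \qquad
D_{\mathrm{hyp}}(t) = \frac{1}{1 + k t}, \qquad
D_{\mathrm{qh}}(t) = \begin{cases} 1, & t = 0,\\ \beta\,\delta^{t}, & t > 0, \end{cases} \qquad
D_{\mathrm{gh}}(t) = (1 + \alpha t)^{-\beta/\alpha}.
```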
In these models the passage of time is linear. We instead consider a psychological model in which the perception of time is subjective. We prove that when biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
We also test the predictions of behavioural theories in the "wild". We pay particular attention to prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinctly different from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is being offered at a discount, the demand for it will be greater than that explained by its price elasticity. Even more importantly, when the item is no longer discounted, demand for its close substitute would increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.
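A minimal sketch of a loss-averse utility inside a multinomial-logit demand model. Everything here is illustrative, not the paper's estimated model: the price coefficient, the reference-price construction, and the loss-aversion coefficient λ = 2.25 (the classic Tversky-Kahneman estimate) are assumptions:

```python
import numpy as np

def loss_averse_utility(price, ref_price, beta=-1.5, lam=2.25):
    """Reference-dependent price utility: deviations from the reference
    price are valued asymmetrically, with losses weighted by lam > 1
    (2.25 is the classic Tversky-Kahneman estimate, used illustratively)."""
    gap = ref_price - price                 # > 0 means a perceived gain
    value = np.where(gap >= 0, gap, lam * gap)
    return beta * price + value

def choice_probs(prices, ref_prices):
    """Multinomial-logit choice probabilities over substitute items."""
    u = loss_averse_utility(np.asarray(prices), np.asarray(ref_prices))
    e = np.exp(u - u.max())
    return e / e.sum()

# When item 0 reverts from a discounted reference price of 8.0 to a shelf
# price of 10.0 (a perceived loss), demand shifts excessively toward its
# substitute, item 1 -- the pattern the abstract describes.
print(choice_probs(prices=[10.0, 9.5], ref_prices=[8.0, 9.5]))
```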
In future work, BROAD can be widely applied for testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to more rapidly eliminate hypotheses and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.
Abstract:
This thesis presents a novel class of algorithms for the solution of scattering and eigenvalue problems on general two-dimensional domains under a variety of boundary conditions, including non-smooth domains and certain "Zaremba" boundary conditions, for which Dirichlet and Neumann conditions are specified on different portions of the domain boundary. The theoretical basis of the methods for the Zaremba problems on smooth domains concerns detailed information, put forth for the first time in this thesis, about the singularity structure of solutions of the Laplace operator under boundary conditions of Zaremba type. The new methods, which are based on the use of Green functions and integral equations, incorporate a number of algorithmic innovations, including a fast and robust eigenvalue-search algorithm, use of the Fourier Continuation method for regularization of all smooth-domain Zaremba singularities, and newly derived quadrature rules which give rise to high-order convergence even around singular points of the Zaremba problem. The resulting algorithms enjoy high-order convergence, and they can tackle a variety of elliptic problems under general boundary conditions, including, for example, eigenvalue problems, scattering problems, and, in particular, eigenfunction expansion for time-domain problems in non-separable physical domains with mixed boundary conditions.
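For concreteness, a model Zaremba eigenvalue problem of the kind described can be written as follows (homogeneous boundary data chosen purely for illustration):

```latex
\Delta u + \lambda u = 0 \ \text{in } \Omega, \qquad
u = 0 \ \text{on } \Gamma_D, \qquad
\frac{\partial u}{\partial n} = 0 \ \text{on } \Gamma_N, \qquad
\partial\Omega = \overline{\Gamma_D} \cup \overline{\Gamma_N}
```

Solutions generically develop singularities at the Dirichlet-Neumann junction points, which is what the quadrature rules and Fourier Continuation regularization described above are designed to handle.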
Abstract:
Background: Bronchiolitis caused by the respiratory syncytial virus (RSV) and its related complications are common in infants born prematurely, with severe congenital heart disease, or with bronchopulmonary dysplasia, as well as in immunosuppressed infants. There is a rich literature on the different aspects of RSV infection, focused, for the most part, on specific risk populations. However, there is a need for a systematic global analysis of the impact of RSV infection in terms of use of resources and health impact on both children and adults. With this aim, we performed a systematic search of scientific evidence on the social, economic, and health impact of RSV infection. Methods: A systematic search of the following databases was performed: MEDLINE, EMBASE, Spanish Medical Index, MEDES-MEDicina in Spanish, Cochrane Plus Library, and Google, without time limits. We selected 421 abstracts based on the 6,598 articles identified. From these abstracts, 4 RSV experts selected the 65 most relevant articles. After reading the full articles, 23 of their references were also selected. Finally, one more article found through a literature information alert system was included. Results: The information collected was summarized and organized into the following topics: 1. Impact on health (infections and respiratory complications, mid- to long-term lung function decline, recurrent wheezing, asthma, other complications such as otitis and rhino-conjunctivitis, and mortality); 2. Impact on resources (visits to primary care and specialists' offices, emergency room visits, hospital admissions, ICU admissions, diagnostic tests, and treatments); 3. Impact on costs (direct and indirect costs); 4. Impact on quality of life; and 5. Strategies to reduce the impact (interventions on social and hygienic factors and prophylactic treatments). Conclusions: We concluded that 1. The health impact of RSV infection is relevant and goes beyond the acute episode phase; 2. The health impact of RSV infection on children is much better documented than the impact on adults; 3. Further research is needed on the mid- and long-term impact of RSV infection on the adult population, especially those at high risk; 4. There is a need for interventions aimed at reducing the impact of RSV infection by targeting health education, information, and prophylaxis in high-risk populations.
Multiobjective pressurized water reactor reload core design by nondominated genetic algorithm search
Abstract:
The design of pressurized water reactor reload cores is not only a formidable optimization problem but also, in many instances, a multiobjective problem. A genetic algorithm (GA) designed to perform true multiobjective optimization on such problems is described. Genetic algorithms simulate natural evolution. They differ from most optimization techniques by searching from one group of solutions to another, rather than from one solution to another. New solutions are generated by breeding from existing solutions. By selecting better (in a multiobjective sense) solutions as parents more often, the population can be evolved to reveal the trade-off surface between the competing objectives. An example illustrating the effectiveness of this novel method is presented and analyzed. It is found that in solving a reload design problem the algorithm evaluates a similar number of loading patterns to other state-of-the-art methods, but in the process reveals much more information about the nature of the problem being solved. The actual computational cost incurred depends on the core simulator used; the GA itself is code independent.
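The "multiobjective sense" of "better" above rests on Pareto dominance. A minimal sketch of extracting the nondominated front from a population follows; the two toy objectives are illustrative assumptions, not the reload-design figures of merit:

```python
import numpy as np

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimized)."""
    return bool(np.all(a <= b) and np.any(a < b))

def nondominated_front(objs):
    """Indices of solutions no other solution dominates."""
    return [i for i, a in enumerate(objs)
            if not any(dominates(b, a) for j, b in enumerate(objs) if j != i)]

# Toy loading-pattern objectives, e.g. (power peaking factor, -cycle length),
# both to be minimized; real values would come from the core simulator.
rng = np.random.default_rng(4)
objs = rng.random((50, 2))
front = nondominated_front(objs)
# A multiobjective GA selects parents preferentially from this front and
# breeds new loading patterns from them, evolving the trade-off surface.
print("nondominated loading patterns:", front)
```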
Abstract:
The University of Cambridge is unusual in that its Department of Engineering is a single department which covers virtually all branches of engineering under one roof. In their first two years of study, our undergraduates study the full breadth of engineering topics and then have to choose a specialization area for the final two years of study. Here we describe part of a course, given towards the end of their second year, which is designed to entice these students to specialize in signal processing and information engineering topics in years 3 and 4. The course is based around a photo editor and an image search application, and it requires no prior knowledge of the z-transform or of two-dimensional signal processing. It does assume some knowledge of 1-D convolution and basic Fourier methods and some prior exposure to Matlab. The subject of this paper, the photo editor, is written in standard Matlab m-files which are fully visible to the students and help them to see how specific algorithms are implemented in detail. © 2011 IEEE.
Abstract:
Images represent a valuable source of information for the construction industry. Due to technological advancements in digital imaging, the increasing use of digital cameras is leading to an ever-increasing volume of images being stored in construction image databases, which makes it hard for engineers to retrieve useful information from them. Content-Based Search Engines are tools that utilize the rich image content and apply pattern recognition methods in order to retrieve similar images. In this paper, we illustrate several project management tasks and show how Content-Based Search Engines can facilitate the automatic retrieval and indexing of construction images in image databases.
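A minimal sketch of the classical content-based retrieval idea (color-histogram signatures compared by histogram intersection); this illustrates the general technique, not the specific engine evaluated in the paper:

```python
import numpy as np

def colour_histogram(image, bins=8):
    """Normalized joint RGB histogram used as a simple content signature.
    `image` is an (H, W, 3) uint8 array."""
    h, _ = np.histogramdd(image.reshape(-1, 3), bins=(bins,) * 3,
                          range=((0, 256),) * 3)
    return h.ravel() / h.sum()

def retrieve(query, database, k=5):
    """Rank stored images by histogram intersection with the query."""
    q = colour_histogram(query)
    scores = [np.minimum(q, colour_histogram(img)).sum() for img in database]
    return np.argsort(scores)[::-1][:k]

# Usage with placeholder images; real construction-site photographs would
# be loaded from the image database instead.
rng = np.random.default_rng(7)
images = [rng.integers(0, 256, (32, 32, 3), dtype=np.uint8) for _ in range(20)]
print(retrieve(images[0], images, k=3))     # index 0 should rank first
```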
Abstract:
Despite its importance, choosing the structural form of the kernel in nonparametric regression remains a black art. We define a space of kernel structures which are built compositionally by adding and multiplying a small number of base kernels. We present a method for searching over this space of structures which mirrors the scientific discovery process. The learned structures can often decompose functions into interpretable components and enable long-range extrapolation on time-series datasets. Our structure search method outperforms many widely used kernels and kernel combination methods on a variety of prediction tasks.
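A hedged sketch of a greedy compositional search of this kind, built from scikit-learn's combinable kernels and scored by log marginal likelihood; the paper's search operators and its model-selection criterion differ in detail, and everything below is illustrative:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import (RBF, RationalQuadratic,
                                              ExpSineSquared, DotProduct)

rng = np.random.default_rng(5)
X = np.linspace(0, 10, 60)[:, None]
y = np.sin(X[:, 0]) + 0.1 * X[:, 0] + 0.1 * rng.standard_normal(60)

base = [RBF(), RationalQuadratic(), ExpSineSquared(), DotProduct()]

def score(kernel):
    """Fit hyperparameters and return the log marginal likelihood."""
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)
    return gp.log_marginal_likelihood_value_

# Greedy structure search: repeatedly try adding or multiplying each base
# kernel onto the current best structure, keeping any improvement.
best = max(base, key=score)
for depth in range(2):
    candidates = [best + b for b in base] + [best * b for b in base]
    challenger = max(candidates, key=score)
    if score(challenger) <= score(best):
        break
    best = challenger
print("selected structure:", best)
```

The `+` and `*` operators on scikit-learn kernels build sum and product kernels, which is what makes the compositional space easy to enumerate here.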
Abstract:
The relationship between the structures of complex fluorides (AB_mF_n) and the spectral structure of the Eu(II) ion in them is investigated by means of pattern recognition methods such as KNN, ALKNN, BAYES, LLM, SIMCA and PCA. A learning set consisting of 32 f-f transition emission host compounds and 31 d-f transition emission host compounds, and a test set consisting of 27 host compounds, were characterized by 12 crystal structural parameters. These parameters, i.e. features, were reduced from 12 to 6 by multiple criteria for the classification of the host compounds as f-f transition emission or d-f transition emission. Recognition rates from 79.4 to 96.8% and prediction capabilities from 85.2 to 92.6% were obtained. On this basis, the spectral structures of the Eu(II) ion in seven unknown host lattices were predicted.
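A minimal sketch of the KNN part of such a workflow. Random placeholder data stands in for the 6 selected crystal-structure features and the f-f/d-f labels; resubstitution accuracy plays the role of the recognition rate and cross-validated accuracy that of the prediction capability:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# Placeholder data: 63 host compounds x 6 selected crystal-structure
# features, labelled f-f (0) or d-f (1) transition emission.
rng = np.random.default_rng(6)
X = rng.standard_normal((63, 6))
y = rng.integers(0, 2, 63)

knn = KNeighborsClassifier(n_neighbors=3)
recognition = knn.fit(X, y).score(X, y)               # resubstitution accuracy
prediction = cross_val_score(knn, X, y, cv=5).mean()  # cross-validated accuracy
print(f"recognition {recognition:.3f}, prediction {prediction:.3f}")
```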
Abstract:
Shrimps Litopenaeus vannamei with initial body weight of 2.108 +/- 0.036 g were sampled for specific growth rate (SGR) and body color measurements for 50 days under different light sources (incandescent lamp, IL; cool-white fluorescent lamp, FL; metal halide lamp, MHL; and a control without a lamp) and different illumination methods (illumination only in the day, IOD, and illumination day and night, IDN). Body color of L. vannamei was measured according to the free astaxanthin concentration (FAC) of the shrimp. The SGR, food intake (FI), feed conversion efficiency (FCE) and FAC of the shrimps showed significant differences among the experimental treatment groups (P < 0.05). Maximum and minimum SGR occurred under IOD by MHL and IDN by FL, respectively (a difference of 56.34%). The FI of the control group did not rank lowest among treatments, confirming that shrimp primarily use scent, not vision, to search for food. FI and FCE were both lowest under IDN by FL and growth was slow; thus FL is not a preferred light source for shrimp culture. Under IOD by MHL, shrimps had the highest FCE and the third-highest FI among treatment groups, ensuring rapid growth. FAC of the shrimps was about 3.31 +/- 0.20 mg/kg. Under IOD by MHL and IDN by FL, FAC was significantly higher than in the other treatments (P < 0.05). To summarize, when illuminated by MHL, L. vannamei had not only vivid body color, due to the high astaxanthin concentration, but also rapid growth. Therefore, MHL is an appropriate indoor light source for shrimp super-intensive culture. SGR was significantly negatively correlated with FAC (P < 0.05). Thus, when FAC increased, SGR did not always follow, suggesting that the purpose of astaxanthin accumulation is not growth promotion but protection against intense light. (c) 2005 Elsevier B.V. All rights reserved.