804 results for "students that use drugs"
Abstract:
Several statistical downscaling models have been developed in the past couple of decades to assess the hydrologic impacts of climate change by projecting station-scale hydrological variables from large-scale atmospheric variables simulated by general circulation models (GCMs). This paper presents and compares different statistical downscaling models that use multiple linear regression (MLR), positive coefficient regression (PCR), stepwise regression (SR), and support vector machine (SVM) techniques for estimating monthly rainfall amounts in the state of Florida. Mean sea level pressure, air temperature, geopotential height, specific humidity, U wind, and V wind are used as the explanatory variables/predictors in the downscaling models. Data for these variables are obtained from the National Centers for Environmental Prediction-National Center for Atmospheric Research (NCEP-NCAR) reanalysis dataset and the Canadian Centre for Climate Modelling and Analysis (CCCma) Coupled Global Climate Model, version 3 (CGCM3) GCM simulations. Principal component analysis (PCA) and the fuzzy c-means clustering method (FCM) are used as part of the downscaling models to reduce the dimensionality of the dataset and to identify clusters in the data, respectively. Evaluation of the performances of the models using different error and statistical measures indicates that the SVM-based model performed better than all the other models in reproducing most monthly rainfall statistics at 18 sites. Output from the third-generation CGCM3 GCM for the A1B scenario was used for future projections. For the projection period 2001-10, MLR was used to relate variables at the GCM and NCEP grid scales. Use of MLR in linking the predictor variables at the GCM and NCEP grid scales yielded better reproduction of monthly rainfall statistics at most of the stations (12 out of 18) than the spatial interpolation technique used in earlier studies.
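As a rough illustration of the SVM-based downscaling step described above, the following Python sketch combines PCA-based dimensionality reduction with support vector regression; the fuzzy c-means clustering step is omitted, and the predictor array, rainfall series, and all parameter values are placeholders rather than the study's data or settings.

```python
# Minimal sketch of an SVM-based downscaling model: PCA for dimensionality
# reduction of the large-scale predictors, then support vector regression
# to estimate station-scale monthly rainfall. Arrays and parameters are
# illustrative placeholders only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 48))   # monthly large-scale predictors (placeholder data)
y = rng.gamma(2.0, 50.0, 600)    # monthly rainfall at one station (placeholder data)

model = make_pipeline(
    StandardScaler(),            # put predictors on comparable scales
    PCA(n_components=10),        # retain the leading principal components
    SVR(kernel="rbf", C=10.0),   # nonlinear regression onto rainfall
)
model.fit(X[:480], y[:480])      # calibrate on the earlier part of the record
print(model.score(X[480:], y[480:]))  # skill on the held-out period
```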
Abstract:
We consider a server serving a time-slotted queued system of multiple packet-based flows, where no more than one flow can be serviced in a single time slot. The flows have exogenous packet arrivals and time-varying service rates. At each time, the server can observe instantaneous service rates for only a subset of flows (selected from a fixed collection of observable subsets) before scheduling a flow in that subset for service. We are interested in queue-length-aware scheduling to keep the queues short. The limited availability of instantaneous service rate information requires the scheduler to make a careful choice of which subset of service rates to sample. We develop scheduling algorithms that use only partial service rate information from subsets of channels and that minimize the likelihood of queue overflow in the system. Specifically, we present a new joint subset-sampling and scheduling algorithm called Max-Exp that uses only the current queue lengths to pick a subset of flows, and subsequently schedules a flow using the Exponential rule. When the collection of observable subsets is disjoint, we show that Max-Exp achieves the best exponential decay rate of the tail of the longest queue among all scheduling algorithms that base their decisions on the current (or any finite past history of the) system state. To accomplish this, we employ novel analytical techniques for studying the performance of scheduling algorithms using partial state, which may be of independent interest. These include new sample-path large deviations results for processes obtained by non-random, predictable sampling of sequences of independent and identically distributed random variables. A consequence of these results is that scheduling with partial state information yields a rate function significantly different from scheduling with full channel information. In the special case when the observable subsets are singleton flows, i.e., when there is effectively no a priori channel state information, Max-Exp reduces to simply serving the flow with the longest queue; thus, our results show that always serving the longest queue in the absence of any channel state information is large-deviations optimal.
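A heavily simplified sketch of the queue-length-aware sampling-then-scheduling loop is given below. The subset-selection step (pick the observable subset containing the longest queue) is an illustrative placeholder rather than the exact Max-Exp rule, and the Exponential rule is shown in a generic form with all per-flow weights set to one; queue and rate values are made up.

```python
# Simplified sketch: pick an observable subset from the current queue lengths,
# sample its instantaneous service rates, then schedule within the subset via
# a generic Exponential rule. The subset rule below is an illustrative
# placeholder, not the exact Max-Exp construction.
import math
import random

def pick_subset(queues, observable_subsets):
    # Placeholder: choose the subset containing the currently longest queue.
    longest = max(range(len(queues)), key=lambda i: queues[i])
    return next(s for s in observable_subsets if longest in s)

def exponential_rule(queues, rates, subset, eta=0.5, beta=1.0):
    qbar = sum(queues) / len(queues)
    # Serve the flow whose rate, exponentially weighted by its queue excess, is largest.
    return max(subset, key=lambda i: rates[i] *
               math.exp((queues[i] - qbar) / (beta + qbar ** eta)))

queues = [5, 2, 9, 1]
observable_subsets = [{0, 1}, {2, 3}]              # disjoint observable subsets
subset = pick_subset(queues, observable_subsets)
rates = {i: random.randint(1, 4) for i in subset}  # observed service rates
served = exponential_rule(queues, rates, subset)
queues[served] = max(0, queues[served] - rates[served])
print(subset, served, queues)
```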
Abstract:
In big data image/video analytics, we encounter the problem of learning an over-complete dictionary for sparse representation from a large training dataset, which cannot be processed at once because of storage and computational constraints. To tackle the problem of dictionary learning in such scenarios, we propose an algorithm that exploits the inherent clustered structure of the training data and makes use of a divide-and-conquer approach. The fundamental idea behind the algorithm is to partition the training dataset into smaller clusters and learn local dictionaries for each cluster. Subsequently, the local dictionaries are merged to form a global dictionary. Merging is done by solving another dictionary learning problem on the atoms of the locally trained dictionaries. This algorithm is referred to as the split-and-merge algorithm. We show that the proposed algorithm is efficient in its usage of memory and computational complexity, and performs on par with the standard learning strategy, which operates on the entire dataset at once. As an application, we consider the problem of image denoising. We present a comparative analysis of our algorithm with the standard learning techniques that use the entire database at once, in terms of training and denoising performance. We observe that the split-and-merge algorithm results in a remarkable reduction in training time without significantly affecting the denoising performance.
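The split-and-merge idea lends itself to a short sketch: cluster the training patches, learn a small local dictionary per cluster, then learn a global dictionary on the pooled local atoms. The sketch below uses scikit-learn's k-means and mini-batch dictionary learning as stand-ins for the paper's algorithms, with purely illustrative sizes and synthetic data.

```python
# Hedged sketch of split-and-merge dictionary learning: split the data into
# clusters, learn one local dictionary per cluster, then merge by learning a
# global dictionary on the pooled local atoms.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
patches = rng.normal(size=(5000, 64))        # e.g. 8x8 image patches, flattened

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(patches)

local_atoms = []
for k in range(4):                           # split: one dictionary per cluster
    dl = MiniBatchDictionaryLearning(n_components=32, random_state=0)
    dl.fit(patches[labels == k])
    local_atoms.append(dl.components_)

# Merge: treat the pooled local atoms as training data for a global dictionary.
merged = np.vstack(local_atoms)
global_dl = MiniBatchDictionaryLearning(n_components=64, random_state=0)
global_dl.fit(merged)
print(global_dl.components_.shape)           # global dictionary atoms
```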
Abstract:
In this work, we address the issue of modeling squeeze film damping in nontrivial geometries that are not amenable to analytical solutions. The design and analysis of microelectromechanical systems (MEMS) resonators, especially those that use platelike two-dimensional structures, require the structural dynamic response over the entire range of frequencies of interest. This response calculation typically involves the analysis of squeeze film effects and acoustic radiation losses. The acoustic analysis of vibrating plates is a very well understood problem that is routinely carried out using equivalent electrical circuits that employ lumped parameters (LP) for the acoustic impedance. Here, we present a method to use the same circuit with the same elements to account for the squeeze film effects as well, by establishing an equivalence between the parameters of the two domains through a rescaled equivalent relationship between the acoustic impedance and the squeeze film impedance. Our analysis is based on a simple observation that the squeeze film impedance rescaled by a factor of jω, where ω is the frequency of oscillation, qualitatively mimics the acoustic impedance over a large frequency range. We present a method to curve fit the numerically simulated stiffness and damping coefficients, which are obtained using finite element analysis (FEA). A significant advantage of the proposed method is that it is applicable to any trivial or nontrivial geometry. It requires only a limited number of finite element method (FEM) runs within the frequency range of interest, hence reducing the computational cost, yet it models the behavior accurately over the entire range. We demonstrate the method using one trivial and one nontrivial geometry.
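To illustrate the curve-fitting step, the sketch below fits a generic single-pole lumped-parameter form to frequency-dependent damping and stiffness coefficients; the model form, the handful of synthetic "FEM" samples, and all parameter values are assumptions for illustration, not the paper's equivalent circuit or data.

```python
# Hedged sketch of the curve-fitting step: fit a generic single-pole
# lumped-parameter form to damping and stiffness coefficients obtained from a
# few FEM runs, then evaluate the fit over the full frequency range.
import numpy as np
from scipy.optimize import curve_fit

def damping(w, c0, wc):
    return c0 / (1.0 + (w / wc) ** 2)          # damping dominates at low frequency

def stiffness(w, k_inf, wc):
    return k_inf * (w / wc) ** 2 / (1.0 + (w / wc) ** 2)  # spring-like at high frequency

w_fem = np.logspace(3, 6, 8)                   # few sample frequencies (rad/s), placeholder
c_fem = damping(w_fem, 2e-4, 5e4) * (1 + 0.02 * np.random.default_rng(0).normal(size=8))
k_fem = stiffness(w_fem, 150.0, 5e4) * (1 + 0.02 * np.random.default_rng(1).normal(size=8))

(c0, wc_c), _ = curve_fit(damping, w_fem, c_fem, p0=[1e-4, 1e4])
(k_inf, wc_k), _ = curve_fit(stiffness, w_fem, k_fem, p0=[100.0, 1e4])

w_dense = np.logspace(3, 6, 400)               # evaluate the fitted model densely
print(c0, wc_c, k_inf, wc_k)
print(damping(w_dense, c0, wc_c)[:3])
```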
Abstract:
We present a method of rapidly producing computer-generated holograms that exhibit geometric occlusion in the reconstructed image. Conceptually, a bundle of rays is shot from every hologram sample into the object volume. We use z-buffering to find the nearest intersecting object point for every ray and add its complex field contribution to the corresponding hologram sample. Each hologram sample belongs to an independent operation, allowing us to exploit the parallel computing capability of modern programmable graphics processing units (GPUs). Unlike algorithms that use points or planar segments as the basis for constructing the hologram, our algorithm's complexity is dependent on fixed system parameters, such as the number of ray-casting operations, and can therefore handle complicated models more efficiently. The finite number of hologram pixels is, in effect, a windowing function, and from analyzing the Wigner distribution function of the windowed free-space transfer function we find an upper limit on the cone angle of the ray bundle. Experimentally, we found that an angular sampling distance of 0.01° for a 2.66° cone angle produces acceptable reconstruction quality. © 2009 Optical Society of America.
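The per-sample accumulation can be sketched on the CPU as follows: for each hologram sample, only object points inside its ray-bundle cone are considered, the nearest one is kept (a crude stand-in for the z-buffer occlusion test), and its complex field contribution is added. Geometry, wavelength, and point cloud are illustrative placeholders, not the paper's GPU implementation.

```python
# Simplified CPU sketch of per-sample ray-bundle accumulation with a crude
# nearest-hit (z-buffer-like) occlusion test. All values are placeholders.
import numpy as np

wavelength = 633e-9
k = 2 * np.pi / wavelength
cone_half_angle = np.deg2rad(1.33)            # half of a ~2.66 degree cone

rng = np.random.default_rng(0)
points = rng.uniform([-1e-3, -1e-3, 5e-3], [1e-3, 1e-3, 15e-3], size=(200, 3))

xs = np.linspace(-1e-3, 1e-3, 64)             # hologram sample coordinates (z = 0 plane)
hologram = np.zeros((64, 64), dtype=complex)

for iy, y in enumerate(xs):
    for ix, x in enumerate(xs):
        d = points - np.array([x, y, 0.0])
        r = np.linalg.norm(d, axis=1)
        inside = np.arccos(d[:, 2] / r) <= cone_half_angle  # within the ray cone
        if not inside.any():
            continue
        nearest = np.argmin(np.where(inside, r, np.inf))    # nearest visible point
        hologram[iy, ix] += np.exp(1j * k * r[nearest]) / r[nearest]

print(np.abs(hologram).max())
```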
Abstract:
The stability of a soil slope is usually analyzed by limit equilibrium methods, in which the identification of the critical slip surface is of principal importance. In this study, a spline curve in conjunction with a genetic algorithm is used to search for the critical slip surface, and Spencer's method is employed to calculate the factor of safety. Three examples are presented to illustrate the reliability and efficiency of the method. Slip surfaces defined by a series of straight lines are compared with those defined by spline curves, and the results indicate that use of spline curves yields better results for a given number of slip surface nodal points compared with the approximation using straight-line segments.
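A sketch of the search strategy alone is given below: a simple genetic algorithm evolves the vertical coordinates of a few slip-surface nodal points, and a cubic spline is passed through them before each candidate is scored. The scoring function spencer_factor_of_safety is a hypothetical placeholder; Spencer's method itself is not implemented here.

```python
# Hedged sketch of the spline + genetic algorithm search. The factor-of-safety
# function is a hypothetical placeholder, not Spencer's method.
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(0)
x_nodes = np.linspace(0.0, 20.0, 6)                 # fixed horizontal nodal positions (m)

def spencer_factor_of_safety(surface):
    # Placeholder score only: smoother, deeper surfaces score lower here.
    y = surface(np.linspace(0.0, 20.0, 100))
    return 1.0 + 0.1 * np.mean(np.abs(np.gradient(y))) + 0.01 * np.mean(y)

def fitness(y_nodes):
    return spencer_factor_of_safety(CubicSpline(x_nodes, y_nodes))

population = rng.uniform(2.0, 10.0, size=(40, 6))   # initial nodal depths (m)
for _ in range(50):                                  # simple generational GA
    scores = np.array([fitness(ind) for ind in population])
    parents = population[np.argsort(scores)[:20]]    # keep the fitter half
    children = parents[rng.integers(0, 20, 40)] + rng.normal(0, 0.2, (40, 6))
    population = children

best = population[np.argmin([fitness(ind) for ind in population])]
print("candidate critical surface nodes:", np.round(best, 2))
```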
Abstract:
Published as an article in: Journal of Environmental Economics and Management, 2005, vol. 50, issue 2, pages 387-407.
Abstract:
Based on the recovery rates for Thalassia testudinum measured in this study for scars of these excavation depths, and assuming a linear recovery horizon, we estimate that it would take ~6.9 years (95% CI = 5.4 to 9.6 years) for T. testudinum to return to the same density as recorded for the adjacent undisturbed population. The application of water-soluble fertilizers and plant growth hormones by mechanical injection into the sediments adjacent to ten propeller scars at Lignumvitae State Botanical Site did not significantly increase the recovery rate of Thalassia testudinum or Halodule wrightii. An alternative method of fertilization and restoration of propeller scars was also tested using a method of "compressed succession," in which Halodule wrightii is substituted for T. testudinum in the initial stages of restoration. Bird roosting stakes were placed among H. wrightii bare-root plantings in prop scars to facilitate the deposition of nitrogen- and phosphorus-enriched feces. In contrast to the fertilizer injection method, the bird stakes produced extremely high recovery rates of transplanted H. wrightii. We conclude that use of a fertilizer/hormone injection machine in the manner described here is not a feasible means of enhancing T. testudinum recovery in propeller scars on soft-bottom carbonate sediments. Existing techniques such as the bird stake approach provide a reliable and inexpensive alternative method that should be considered for application to restoration of seagrasses in these environments. (Document contains 40 pages)
Abstract:
Policy makers, natural resource managers, regulators, and the public often call on scientists to estimate the potential ecological changes caused by both natural and human-induced stresses, and to determine how those changes will impact people and the environment. To develop accurate forecasts of ecological changes, we need to: 1) increase understanding of ecosystem composition, structure, and functioning; 2) expand ecosystem monitoring and apply advanced scientific information to make these complex data widely available; and 3) develop and improve forecast and interpretive tools that use a scientific basis to assess the results of management and science policy actions. (PDF contains 120 pages)
Abstract:
The Alliance for Coastal Technologies (ACT) convened a workshop, sponsored by the Hawaii-Pacific and Alaska Regional Partners, entitled Underwater Passive Acoustic Monitoring for Remote Regions at the Hawaii Institute of Marine Biology from February 7-9, 2007. The workshop was designed to summarize existing passive acoustic technologies and their uses, as well as to make strategic recommendations for future development and collaborative programs that use passive acoustic tools for scientific investigation and resource management. The workshop was attended by 29 people representing three sectors: research scientists, resource managers, and technology developers. The majority of passive acoustic tools are being developed by individual scientists for specific applications, and few tools are available commercially. Most scientists are developing hydrophone-based systems to listen for species-specific information on fish or cetaceans; a few scientists are listening for biological indicators of ecosystem health. Resource managers are interested in passive acoustics primarily for vessel detection in remote protected areas and secondarily to obtain biological and ecological information. The military has been monitoring with hydrophones for decades; however, data and signal processing software has not been readily available to the scientific community, and future collaboration is greatly needed. The challenges that impede future development of passive acoustics are surmountable with greater collaboration. Hardware exists and is accessible; the limits are in the software and in the interpretation of sounds and their correlation with ecological events. Collaboration with the military and the private companies it contracts will assist scientists and managers with obtaining and developing software and data analysis tools. Collaborative proposals among scientists to receive larger pools of money for exploratory acoustic science will further develop the ability to correlate noise with ecological activities. The existing technologies and data analysis are adequate to meet resource managers' needs for vessel detection. However, collaboration is needed among resource managers to prepare large-scale programs that include centralized processing, in an effort to address the lack of local capacity within management agencies to analyze and interpret the data. Workshop participants suggested that ACT might facilitate such collaborations through its website and by providing recommendations to key agencies and programs, such as DOD, NOAA, and IOOS. There is a need to standardize data formats and archive acoustic environmental data at the national and international levels. Specifically, there is a need for local training and primers for public education, as well as for pilot demonstration projects, perhaps in conjunction with National Marine Sanctuaries. Passive acoustic technologies should be implemented immediately to address vessel monitoring needs. Ecological and health monitoring applications should be developed as vessel monitoring programs provide additional data and opportunities for more exploratory research. Passive acoustic monitoring should also be correlated with water quality monitoring to ease integration into long-term monitoring programs, such as the ocean observing systems. [PDF contains 52 pages]
Abstract:
Computer vision algorithms that use color information require color-constant images to operate correctly. Color constancy of the images is usually achieved in two steps: first the illuminant is detected, and then the image is transformed with a chromatic adaptation transform (CAT). Existing CAT methods use a single transformation matrix for all the colors of the input image. The method proposed in this paper requires multiple corresponding color pairs between the source and target illuminants, given by patches of the Macbeth color checker. It uses Delaunay triangulation to divide the color gamut of the input image into small triangles. Each color of the input image is associated with the triangle containing the color point and transformed with a full linear model associated with that triangle. A full linear model is used because diagonal models are known to be inaccurate when the channel color-matching functions do not have narrow peaks. Objective evaluation showed that the proposed method outperforms existing CAT methods by more than 21%; that is, it performs statistically significantly better than the other existing methods.
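A compact sketch of the triangulated transform is shown below: the source chromaticities of the corresponding color pairs are Delaunay-triangulated, a full 3x3 linear model is fitted per triangle from its three color pairs, and each pixel is transformed by the matrix of its containing triangle. The per-triangle fitting rule, the synthetic patch data, and the out-of-gamut fallback are simplifications, not the paper's exact procedure.

```python
# Hedged sketch of a Delaunay-triangulated chromatic adaptation transform.
# Patch colors, illuminant change, and pixel data are synthetic placeholders.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
src = rng.uniform(0.05, 1.0, size=(24, 3))        # e.g. Macbeth patches, source illuminant
dst = src @ np.diag([1.1, 1.0, 0.85])             # same patches under the target illuminant

chroma = src[:, :2] / src.sum(axis=1, keepdims=True)   # 2-D chromaticities of the patches
tri = Delaunay(chroma)

models = []
for simplex in tri.simplices:
    # Full 3x3 linear model mapping the triangle's three source colors to their targets.
    M, *_ = np.linalg.lstsq(src[simplex], dst[simplex], rcond=None)
    models.append(M)

pixels = rng.uniform(0.05, 1.0, size=(1000, 3))
pix_chroma = pixels[:, :2] / pixels.sum(axis=1, keepdims=True)
which = tri.find_simplex(pix_chroma)              # containing triangle (-1 if outside gamut)
out = np.stack([pixels[i] @ models[t] if t >= 0 else pixels[i]   # pass through if outside
                for i, t in enumerate(which)])
print(out.shape)
```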
Abstract:
This dissertation comprises three essays that use theory-based experiments to gain understanding of how cooperation and efficiency are affected by certain variables and institutions in different types of strategic interactions prevalent in our society.
Chapter 2 analyzes indefinite-horizon two-person dynamic favor exchange games with private information in the laboratory. Using a novel experimental design to implement a dynamic game with a stochastic jump signal process, this study provides insights into a relationship in which cooperation occurs without immediate reciprocity. The primary finding is that favor provision under these conditions is considerably less than under the most efficient equilibrium. Also, individuals do not engage in exact score-keeping of net favors; rather, the time since the last favor was provided affects decisions to stop or restart providing favors.
Evidence from experiments in Cournot duopolies is presented in Chapter 3, where players engage in a form of pre-play communication, termed a revision phase, before playing the one-shot game. During this revision phase individuals announce their tentative quantities, which are publicly observed, and revisions are costless. Under real-time revision, the payoffs are determined only by the quantities selected at the end, whereas in a Poisson revision game, opportunities to revise arrive according to a synchronous Poisson process and the tentative quantity corresponding to the last revision opportunity is implemented. Contrasting results emerge. While real-time revision of quantities results in choices that are more competitive than the static Cournot-Nash outcome, significantly lower quantities are implemented in the Poisson revision games. This shows that partial cooperation can be sustained even when individuals interact only once.
Chapter 4 investigates the effect of varying the message space in a public good game with pre-play communication where player endowments are private information. We find that neither binary communication nor a larger finite numerical message space results in any efficiency gain relative to the situation without any form of communication. Payoffs and public good provision are higher only when participants are provided with a discussion period through unrestricted text chat.
Abstract:
This thesis studies three classes of randomized numerical linear algebra algorithms, namely: (i) randomized matrix sparsification algorithms, (ii) low-rank approximation algorithms that use randomized unitary transformations, and (iii) low-rank approximation algorithms for symmetric positive-semidefinite (SPSD) matrices.
Randomized matrix sparsification algorithms set randomly chosen entries of the input matrix to zero. When the approximant is substituted for the original matrix in computations, its sparsity allows one to employ faster sparsity-exploiting algorithms. This thesis contributes bounds on the approximation error of nonuniform randomized sparsification schemes, measured in the spectral norm and two NP-hard norms that are of interest in computational graph theory and subset selection applications.
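As a minimal illustration of nonuniform randomized sparsification, the sketch below keeps each entry independently with probability proportional to its magnitude and rescales the kept entries so the sparsifier is unbiased; this generic probability rule is an assumption for illustration and not necessarily the scheme analyzed in the thesis.

```python
# Minimal sketch of nonuniform randomized matrix sparsification with
# magnitude-proportional keep-probabilities and unbiased rescaling.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 200))

s = 5000                                                 # expected number of kept entries
p = np.minimum(1.0, s * np.abs(A) / np.abs(A).sum())     # keep-probabilities
keep = rng.random(A.shape) < p
A_sparse = np.where(keep, A / np.where(keep, p, 1.0), 0.0)  # unbiased: E[A_sparse] = A

err = np.linalg.norm(A - A_sparse, 2) / np.linalg.norm(A, 2)
print(keep.sum(), "entries kept, relative spectral-norm error:", round(err, 3))
```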
Low-rank approximations based on randomized unitary transformations have several desirable properties: they have low communication costs, are amenable to parallel implementation, and exploit the existence of fast transform algorithms. This thesis investigates the tradeoff between the accuracy and cost of generating such approximations. State-of-the-art spectral and Frobenius-norm error bounds are provided.
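The flavor of a fast-transform-based low-rank approximation can be conveyed in a few lines: random sign flips, an FFT as the fast unitary mixing step, and a random column subsample form the sketch, after which a QR-based projection yields the approximation. The construction below glosses over details such as real versus complex transforms and scaling constants.

```python
# Hedged sketch of a low-rank approximation built from a fast randomized
# transform (random signs, FFT, column subsampling) and a QR projection.
import numpy as np

rng = np.random.default_rng(0)
m, n, ell = 400, 300, 40
A = rng.normal(size=(m, 20)) @ rng.normal(size=(20, n))   # synthetic rank-20 matrix

signs = rng.choice([-1.0, 1.0], size=n)          # random diagonal sign flips
cols = rng.choice(n, size=ell, replace=False)    # random column subsample
Y = np.fft.fft(A * signs, axis=1)[:, cols]       # the sketch Y = A D F R (complex)

Q, _ = np.linalg.qr(Y)                           # orthonormal basis for the range of Y
A_hat = Q @ (Q.conj().T @ A)                     # rank-ell approximation of A
err = np.linalg.norm(A - A_hat) / np.linalg.norm(A)
print("relative Frobenius error:", round(err, 3))
```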
The last class of algorithms considered consists of SPSD "sketching" algorithms. Such sketches can be computed faster than approximations based on projecting onto mixtures of the columns of the matrix. The performance of several such sketching schemes is empirically evaluated using a suite of canonical matrices drawn from machine learning and data analysis applications, and a framework is developed for establishing theoretical error bounds.
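One member of this family, a Nystrom-type column sketch, is easy to write down: sample columns of the SPSD matrix and combine C = A S with W = S^T A S through a pseudoinverse. The sketch below uses uniform column sampling and a synthetic low-rank test matrix purely for illustration.

```python
# Hedged sketch of one SPSD sketching scheme: a Nystrom-type column sketch
# A_hat = C W^+ C^T with C = A S and W = S^T A S.
import numpy as np

rng = np.random.default_rng(0)
n, ell = 300, 30
G = rng.normal(size=(n, 40))
A = G @ G.T                                   # an SPSD test matrix (rank 40)

cols = rng.choice(n, size=ell, replace=False) # S = uniform column sampler
C = A[:, cols]                                # C = A S
W = A[np.ix_(cols, cols)]                     # W = S^T A S
A_hat = C @ np.linalg.pinv(W) @ C.T           # Nystrom-style SPSD approximation

err = np.linalg.norm(A - A_hat) / np.linalg.norm(A)
print("relative Frobenius error:", round(err, 3))
```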
In addition to studying these algorithms, this thesis extends the Matrix Laplace Transform framework to derive Chernoff and Bernstein inequalities that apply to all the eigenvalues of certain classes of random matrices. These inequalities are used to investigate the behavior of the singular values of a matrix under random sampling, and to derive convergence rates for each individual eigenvalue of a sample covariance matrix.
Abstract:
30 p.
Abstract:
21 p.