900 results for Discrete Sampling
Abstract:
The timed-initiation paradigm developed by Ghez and colleagues (1997) has revealed two modes of motor planning: continuous and discrete. Continuous responding occurs when targets are separated by less than 60° of spatial angle, and discrete responding occurs when targets are separated by more than 60°. Although these two modes are thought to reflect the operation of separable strategic planning systems, a new theory of movement preparation, Dynamic Field Theory, suggests that both modes emerge flexibly from the same system. Experiment 1 replicated continuous and discrete performance using a task modified to allow for a critical test of the single-system view. In Experiment 2, participants were allowed to correct their movements after movement initiation (the standard task does not allow corrections). Results showed continuous planning performance at both large and small target separations. These results are consistent with the proposal that the two modes reflect the time-dependent “preshaping” of a single planning system.
Abstract:
"How large a sample is needed to survey the bird damage to corn in a county in Ohio or New Jersey or South Dakota?" Like those in the Bureau of Sport Fisheries and Wildlife and the U.S.D.A. who have been faced with a question of this sort, we found only meager information on which to base an answer, whether the problem related to a county in Ohio or to one in New Jersey, or elsewhere. Many sampling methods and rates of sampling did yield reliable estimates, but the judgment was often intuitive or based on the reasonableness of the resulting data. Later, when planning the next study or survey, little additional information was available on whether 40 samples of 5 ears each or 5 samples of 200 ears should be examined, i.e., examination of a large number of small samples or a small number of large samples. What information is needed to make a reliable decision? Those of us involved with the Agricultural Experiment Station regional project concerned with the problems of bird damage to crops, known as NE-49, thought we might supply an answer if we had a corn field in which all the damage was measured. If all the damage were known, we could then sample this field in various ways, see how the estimates from these samplings compared to the actual damage, and pinpoint the best and most accurate sampling procedure. Eventually the investigators in four states became involved in this work, and instead of one field we were able to broaden the geographical base by examining all the corn ears in 2 half-acre sections of fields in each state, 8 sections in all. When the corn had matured well past the dough stage, damage on each corn ear was assessed, without removing the ear from the stalk, by visually estimating the percent of the kernel surface which had been destroyed and rating it in one of 5 damage categories.
Measurements (in row-centimeters) of the rows of kernels pecked by birds were also made on selected ears representing all categories and all parts of each field section. These measurements provided conversion factors that, when fed into a computer, were applied to the more than 72,000 visually assessed ears. The computer then held in its memory, and could supply on demand, a map showing each ear, its location, and the intensity of the damage.
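The many-small-versus-few-large question posed above can be probed by simulation once every ear's damage is known. A minimal sketch under stated assumptions: a hypothetical field of Beta-distributed per-ear damage fractions (not the NE-49 data) and simple random draws with no spatial clustering, which is precisely the simplification that makes cluster size irrelevant here:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical field of 20,000 ears with skewed per-ear damage fractions
# (illustrative only; this is not the NE-49 field data).
field = rng.beta(0.3, 6.0, size=20000)

def estimate_se(n_samples, ears_per_sample, reps=1000):
    # Empirical standard error of the mean-damage estimate for a plan that
    # examines `n_samples` samples of `ears_per_sample` ears each.
    total = n_samples * ears_per_sample
    ests = [rng.choice(field, total, replace=False).mean() for _ in range(reps)]
    return float(np.std(ests))

# Under simple random draws only the total ear count matters, so 5 samples
# of 200 ears (1,000 ears) beats 40 samples of 5 (200 ears); spatially
# clustered damage, as in real fields, is what makes the choice nontrivial.
se_small_clusters = estimate_se(40, 5)
se_large_clusters = estimate_se(5, 200)
```

A more faithful simulation would draw contiguous ears along rows, so that within-sample spatial correlation penalizes few large clusters.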
Abstract:
Contamination by butyltin compounds (BTs) has been reported in estuarine environments worldwide, with serious impacts on the biota of these areas. Considering that BTs can be degraded by varying environmental conditions such as incident light and salinity, short-term variations in such factors may lead to inaccurate estimates of BT concentrations in nature. Therefore, the present study aimed to evaluate the possibility that measurements of BTs in estuarine sediments are influenced by different sampling conditions, including period of the day (day or night), tidal zone (intertidal or subtidal), and tide (high or low). The study area is located on the Brazilian southeastern coast, São Vicente Estuary, at Pescadores Beach, where BT contamination was previously detected. Three replicate samples of surface sediment were collected randomly in each combination of period of the day, tidal zone, and tide condition, from three subareas along the beach, totaling 72 samples. BTs were analyzed by GC-PFPD using a tin filter and a VF-5 column, by means of a validated method. The concentrations of tributyltin (TBT), dibutyltin (DBT), and monobutyltin (MBT) ranged from undetectable to 161 ng Sn g⁻¹ (dry weight). In most samples (71%), only MBT was quantifiable, whereas TBT was measurable in only 14, suggesting either old contamination or rapid degradation processes. DBT was found in 27 samples but could be quantified in only one. MBT concentrations did not differ significantly with period of the day, tidal zone, or tide condition. DBT and TBT could not be compared across all these environmental conditions, because only a few samples were above the quantification limit. Pooled samples of TBT did not reveal any difference between day and night. These results indicate that, in assessing contamination by butyltin compounds, surface-sediment samples can be collected under any of these environmental conditions.
However, the wide variation of BT concentrations in the study area, i.e., over a very small geographic scale, illustrates the need for representative hierarchical and composite sampling designs that are compatible with the multiscalar temporal and spatial variability common to most marine systems. The use of such sampling designs will be necessary for future attempts to quantitatively evaluate and monitor the occurrence and impact of these compounds in nature.
Abstract:
We prove some estimates on the spectrum of the Laplacian of the total space of a Riemannian submersion in terms of the spectrum of the Laplacian of the base and the geometry of the fibers. When the fibers of the submersions are compact and minimal, we prove that the spectrum of the Laplacian of the total space is discrete if and only if the spectrum of the Laplacian of the base is discrete. When the fibers are not minimal, we prove a discreteness criterion for the total space in terms of the relative growth of the mean curvature of the fibers and the mean curvature of the geodesic spheres in the base. We discuss in particular the case of warped products.
Abstract:
Background: Cellulose, consisting of arrays of linear beta-1,4-linked glucans, is the most abundant carbon-containing polymer present in biomass. Recalcitrance of crystalline cellulose towards enzymatic degradation is widely reported and is the result of intra- and inter-molecular hydrogen bonds within and among the linear glucans. Cellobiohydrolases are enzymes that attack crystalline cellulose. Here we report on two forms of glycosyl hydrolase family 7 cellobiohydrolases common to all aspergilli that attack Avicel, cotton cellulose and other forms of crystalline cellulose. Results: Cellobiohydrolases Cbh1 and CelD have similar catalytic domains, but only Cbh1 contains a carbohydrate-binding domain (CBD) that binds to cellulose. Structural superpositioning of Cbh1 and CelD on the Talaromyces emersonii Cel7A 3-dimensional structure identifies the typical tunnel-like catalytic active site, while Cbh1 shows an additional loop that partially obstructs the substrate-fitting channel. CelD does not have a CBD and shows a four-amino-acid-residue deletion on the tunnel-obstructing loop, providing a continuous opening in the absence of a CBD. Cbh1 and CelD are catalytically functional; specific activity against Avicel is 7.7 and 0.5 U mg prot⁻¹, respectively, while specific activity on pNPC is virtually identical. Cbh1 is slightly more stable to thermal inactivation than CelD and is much less sensitive to glucose inhibition, suggesting that an open tunnel configuration, or absence of a CBD, alters the way the catalytic domain interacts with the substrate. Mixtures of Cbh1 and CelD on crystalline cellulosic substrates show a strong combinatorial effect when Cbh1 is present in 2:1 or 4:1 molar excess. When CelD was overrepresented, the combinatorial effect could only be partially recovered.
CelD appears to bind and hydrolyze only loose cellulosic chains, while Cbh1 is capable of opening new cellulosic substrate molecules away from the cellulosic fiber. Conclusion: Cellobiohydrolases both with and without a CBD occur in most fungal genomes where both enzymes are secreted, and likely participate in cellulose degradation. The fact that only Cbh1 binds to the substrate, and that the combination with CelD exhibits strong synergy only when Cbh1 is present in excess, suggests that Cbh1 unties enough chains from cellulose fibers to enable processive access by CelD.
Abstract:
Recent research has shown that the performance of metaheuristics can be affected by population initialization. Opposition-based Differential Evolution (ODE), Quasi-Oppositional Differential Evolution (QODE), and Uniform-Quasi-Opposition Differential Evolution (UQODE) are three state-of-the-art methods that improve the performance of the Differential Evolution algorithm through population initialization and different search strategies. In a different approach to achieve similar results, this paper presents a technique to discover promising regions in the continuous search space of an optimization problem. Using machine-learning techniques, the algorithm, named Smart Sampling (SS), finds regions with a high possibility of containing a global optimum. Next, a metaheuristic can be initialized inside each region to find that optimum. SS and DE were combined (originating the SSDE algorithm) to evaluate our approach, and experiments were conducted on the same set of benchmark functions used by the ODE, QODE and UQODE authors. Results show that the total number of function evaluations required by DE to reach the global optimum can be significantly reduced and that the success rate improves if SS is employed first. Such results are also in consonance with results from the literature, attesting to the importance of an adequate starting population. Moreover, SS presents better efficacy in finding initial populations of superior quality when compared to the other three algorithms that employ oppositional learning. Finally, and most importantly, the SS performance in finding promising regions is independent of the metaheuristic with which SS is combined, making SS suitable for improving the performance of a large variety of optimization techniques. (C) 2012 Elsevier Inc. All rights reserved.
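The abstract does not detail the machine-learning step of Smart Sampling, but the core idea — probe the search space, then hand a metaheuristic a region likely to contain the optimum — can be sketched crudely. Everything below (the `sphere` objective, the uniform probing, the bounding-box heuristic) is illustrative, not the authors' algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):
    # Illustrative objective with a global optimum at the origin.
    return float(np.sum(x ** 2))

def promising_region(f, lo, hi, n=400, keep=0.1):
    # Crude stand-in for Smart Sampling: probe the box [lo, hi] uniformly,
    # then take a bounding box around the best `keep` fraction of probes.
    pts = rng.uniform(lo, hi, size=(n, len(lo)))
    vals = np.array([f(p) for p in pts])
    best = pts[np.argsort(vals)[: max(1, int(keep * n))]]
    return best.min(axis=0), best.max(axis=0)

lo, hi = np.full(2, -10.0), np.full(2, 10.0)
new_lo, new_hi = promising_region(sphere, lo, hi)
# A metaheuristic such as DE would now be initialized inside [new_lo, new_hi].
```

The shrunken box is where a DE population would be seeded, which is the sense in which the approach is metaheuristic-agnostic.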
Abstract:
For a locally compact Hausdorff space K and a Banach space X we denote by C-0(K, X) the space of X-valued continuous functions on K which vanish at infinity, provided with the supremum norm. Let n be a positive integer, Gamma an infinite set with the discrete topology, and X a Banach space having non-trivial cotype. We first prove that if the nth derived set of K is not empty, then the Banach-Mazur distance between C-0(Gamma, X) and C-0(K, X) is greater than or equal to 2n + 1. We also show that the Banach-Mazur distance between C-0(N, X) and C([1, omega(n)k], X) is exactly 2n + 1, for any positive integers n and k. These results extend and provide a vector-valued version of some 1970 Cambern theorems, concerning the cases where n = 1 and X is the scalar field.
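For reference (the abstract does not restate it), the Banach–Mazur distance appearing in the bounds above is the standard multiplicative one:

```latex
d_{BM}(X, Y) \;=\; \inf \bigl\{ \, \|T\| \, \|T^{-1}\| \;:\; T \colon X \to Y \ \text{is an isomorphism} \, \bigr\}
```

so the statement that the distance is exactly 2n + 1 bounds the product of the norms of any isomorphism and its inverse from below by 2n + 1, with the bound attained.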
Abstract:
The aim of solving the Optimal Power Flow problem is to determine the optimal state of an electric power transmission system, that is, the voltage magnitudes and phase angles and the tap ratios of the transformers that optimize the performance of a given system, while satisfying its physical and operating constraints. The Optimal Power Flow problem is modeled as a large-scale mixed-discrete nonlinear programming problem. This paper proposes a method for handling the discrete variables of the Optimal Power Flow problem based on a penalty function. Due to the inclusion of the penalty function into the objective function, a sequence of nonlinear programming problems with only continuous variables is obtained, and the solutions of these problems converge to a solution of the mixed problem. The resulting nonlinear programming problems are solved by a Primal-Dual Logarithmic-Barrier Method. Numerical tests using the IEEE 14-, 30-, 118- and 300-bus test systems indicate that the method is efficient. (C) 2012 Elsevier B.V. All rights reserved.
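The paper's exact penalty function is not given in the abstract; a common choice for driving a relaxed variable onto a discrete grid is a sinusoidal penalty that vanishes exactly at integer multiples of the step size. A minimal sketch (function names and the illustrative objective are assumptions, not the authors' formulation):

```python
import numpy as np

def discrete_penalty(x, step, alpha):
    # Zero exactly when every entry of x is an integer multiple of `step`,
    # strictly positive otherwise; alpha scales the penalty weight.
    return alpha * float(np.sum(np.sin(np.pi * np.asarray(x) / step) ** 2))

def penalized(f, x, step, alpha):
    # Continuous NLP objective: original cost plus discreteness penalty.
    # Solving a sequence of these with increasing alpha pushes the relaxed
    # discrete variables (e.g. tap ratios) onto the grid.
    return f(x) + discrete_penalty(x, step, alpha)
```

For example, transformer tap ratios with a 0.0125 step would use `step=0.0125`, while the truly continuous variables (voltage magnitudes, angles) stay unpenalized.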
Abstract:
Within-site variability in species detectability is a problem common to many biodiversity assessments and can strongly bias the results. Such variability can be caused by many factors, including simple counting inaccuracies, which can be solved by increasing sample size, or by temporal changes in species behavior, meaning that the design of the temporal sampling protocol is also very important. Here we use the example of mist-netted tropical birds to determine how design decisions in the temporal sampling protocol can alter the data collected and how these changes might affect the detection of ecological patterns, such as the species-area relationship (SAR). Using data from almost 3400 birds captured over 21,000 net-hours at 31 sites in the Brazilian Atlantic Forest, we found that the magnitude of ecological trends remained fairly stable, but the probability of detecting statistically significant ecological patterns varied depending on sampling effort, time of day, and season in which sampling was conducted. For example, more species were detected in the wet season, but the SAR was strongest in the dry season. We found that the temporal distribution of sampling effort was more important than its total amount, discovering that similar ecological results could have been obtained with one-third of the total effort, as long as each site had been sampled equally over 2 yr. We argue that projects with the same sampling effort and spatial design, but with different temporal sampling protocols, are likely to report different ecological patterns, which may ultimately lead to inappropriate conservation strategies.
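The species-area relationship mentioned above is conventionally fit as the power law S = cA^z, i.e., a straight line on log-log axes. A minimal sketch with made-up site data (the areas and counts below are illustrative, not the Atlantic Forest dataset):

```python
import numpy as np

# Hypothetical site areas (ha) and observed species richness per site.
A = np.array([1.0, 5.0, 10.0, 50.0, 100.0])
S = np.array([12, 21, 26, 41, 50])

# Least-squares fit of log S = log c + z log A.
z, log_c = np.polyfit(np.log(A), np.log(S), 1)
c = np.exp(log_c)
```

Comparing the fitted slope z (and its significance) between protocols, seasons, or effort levels is the kind of analysis the abstract describes as sensitive to the temporal sampling design.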
Abstract:
We report self-similar properties of periodic structures remarkably organized in the two-parameter space of a two-gene system described by a two-dimensional symmetric map. The map consists of difference equations derived from the chemical reactions for gene expression and regulation. We characterize the system using Lyapunov exponents and isoperiodic diagrams, identifying periodic windows known as Arnold tongues and shrimp-shaped structures. Period-adding sequences are observed for both types of periodic windows. We also identify Fibonacci-type series and the golden ratio for the Arnold tongues, and period multiple-of-three windows for the shrimps. (C) 2012 Elsevier B.V. All rights reserved.
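The Lyapunov-exponent characterization above uses the standard tangent-vector method: evolve a perturbation with the map's Jacobian and average its log growth. Since the paper's two-gene map equations are not reproduced in the abstract, the sketch below uses the Hénon map as a stand-in 2D map (a positive largest exponent indicates chaos; periodic windows yield negative values):

```python
import numpy as np

# Stand-in 2D map: the Henon map (a = 1.4, b = 0.3); the paper's two-gene
# map is not given in the abstract, so this choice is illustrative only.
def henon(x, y, a=1.4, b=0.3):
    return 1.0 - a * x * x + y, b * x

def jacobian(x, y, a=1.4, b=0.3):
    # Derivative of the map at (x, y), used to evolve a tangent vector.
    return np.array([[-2.0 * a * x, 1.0], [b, 0.0]])

def largest_lyapunov(n=20000, burn=1000):
    # Average log growth of a repeatedly renormalized tangent vector.
    x, y = 0.1, 0.1
    v = np.array([1.0, 0.0])
    total = 0.0
    for i in range(burn + n):
        v = jacobian(x, y) @ v
        norm = np.linalg.norm(v)
        v = v / norm
        if i >= burn:
            total += np.log(norm)
        x, y = henon(x, y)
    return total / n
```

Scanning this quantity over a two-parameter grid is what produces the isoperiodic/Lyapunov diagrams in which Arnold tongues and shrimps appear.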
Abstract:
In this paper, we consider the stochastic optimal control problem of discrete-time linear systems subject to Markov jumps and multiplicative noises under two criteria. The first is an unconstrained mean-variance trade-off performance criterion over time, and the second is a minimum-variance criterion over time with constraints on the expected output. We present explicit conditions for the existence of an optimal control strategy for these problems, generalizing previous results in the literature. We conclude the paper with a numerical example of a multi-period portfolio selection problem with regime switching, in which the goal is to minimize the sum of the variances of the portfolio over time under the restriction of keeping the expected value of the portfolio above minimum values specified by the investor. (C) 2011 Elsevier Ltd. All rights reserved.
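The variance-minimization-under-expected-value-constraint idea has a familiar single-period special case: the classical Markowitz weights solving min w'Σw subject to w'1 = 1 and w'μ = r. The paper's multi-period, regime-switching setting is far more general; this is only a sketch of the constraint structure, with made-up numbers:

```python
import numpy as np

def min_variance_weights(Sigma, mu, target):
    # Single-period Markowitz sketch: minimize w' Sigma w subject to
    # w' 1 = 1 (fully invested) and w' mu = target (expected return).
    # Lagrange conditions give w = Sigma^-1 (lam * 1 + gam * mu), with
    # (lam, gam) solved from the two constraint equations.
    inv = np.linalg.inv(Sigma)
    ones = np.ones(len(mu))
    A = ones @ inv @ ones
    B = ones @ inv @ mu
    C = mu @ inv @ mu
    lam, gam = np.linalg.solve(np.array([[A, B], [B, C]]),
                               np.array([1.0, target]))
    return inv @ (lam * ones + gam * mu)

# Made-up covariance matrix and mean returns for two assets.
Sigma = np.array([[0.04, 0.01], [0.01, 0.09]])
mu = np.array([0.05, 0.10])
w = min_variance_weights(Sigma, mu, target=0.07)
```

In the paper's setting the analogue is dynamic: the weights, means, and covariances depend on the Markov regime at each period, and the expected-value floor is imposed along the whole horizon.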
Abstract:
This Letter reports an investigation of the optical properties of copper nanocubes as a function of size, as modeled by the discrete dipole approximation. In the far field, our results show that the extinction resonances shifted from 595 to 670 nm as the size increased from 20 to 100 nm. Also, the highest optical efficiencies for absorption and scattering were obtained for nanocubes that were 60 and 100 nm in size, respectively. In the near field, the electric-field amplitudes were investigated at excitation wavelengths of 514, 633 and 785 nm. The E-fields increased with size and were highest at 633 nm. (c) 2012 Elsevier B.V. All rights reserved.
Abstract:
Background: Air pollution in São Paulo is constantly measured by the State of São Paulo Environmental Agency; however, there is no information on the variation between places with different traffic densities. This study was intended to identify a gradient of exposure to traffic-related air pollution across different areas of São Paulo to provide information for future epidemiological studies. Methods: We measured NO2 using Palmes' diffusion tubes at 36 sites on streets chosen to be representative of different road types and traffic densities in São Paulo during two one-week periods (July and August 2000). In each study period, two tubes were installed at each site, and two additional tubes were installed at 10 control sites. Results: Average NO2 concentrations were related to traffic density observed on the spot, to the number of vehicles counted, and to traffic density strata defined by the city Traffic Engineering Company (CET). Average NO2 concentrations were 63 μg/m3 and 49 μg/m3 in the first and second periods, respectively. Dividing the sites by observed traffic density, we found: heavy traffic (n = 17): 64 μg/m3 (95% CI: 59–68 μg/m3); local traffic (n = 16): 48 μg/m3 (95% CI: 44–52 μg/m3) (p < 0.001). Conclusion: The differences in NO2 levels between heavy- and local-traffic sites are large enough to suggest the use of a more refined classification of exposure in epidemiological studies in the city. The number of vehicles counted, traffic density observed on the spot, and traffic density strata defined by the CET might be used as proxies for traffic exposure in São Paulo when more accurate measurements are not available.
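The stratum summaries above are means with 95% confidence intervals; a minimal normal-approximation sketch (the authors' exact interval method is not stated in the abstract, and the readings below are hypothetical):

```python
import numpy as np

def mean_ci95(x):
    # Sample mean with a normal-approximation 95% CI (mean +/- 1.96 SE).
    x = np.asarray(x, dtype=float)
    m = x.mean()
    half = 1.96 * x.std(ddof=1) / np.sqrt(len(x))
    return m, m - half, m + half

# Hypothetical tube readings (ug/m3) for one stratum, not the measured data.
m, ci_lo, ci_hi = mean_ci95([61.0, 66.0, 63.0, 65.0, 62.0])
```

With the small per-stratum sample sizes reported (n = 16-17), a t-based interval would be slightly wider than this normal approximation.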
Abstract:
This work was supported by the Brazilian agencies FAPESP, CAPES and CNPq.
Abstract:
This thesis covers sampling and analytical procedures for isocyanates (R-NCO) and amines (R-NH2), two kinds of chemicals frequently used in association with the polymeric material polyurethane (PUR). Exposure to isocyanates may result in respiratory disorders and dermal sensitisation, and they are one of the main causes of occupational asthma. Several of the aromatic diamines associated with PUR production are classified as suspected carcinogens. Hence, the presence of these chemicals in different exposure situations must be monitored. For determining isocyanates in air, the methodologies included derivatisation with the reagent di-n-butylamine (DBA) upon collection and subsequent determination using liquid chromatography (LC) and mass spectrometric detection (MS). A user-friendly solvent-free sampler for collection of airborne isocyanates was developed as an alternative to a more cumbersome impinger-filter sampling technique. The combination of the DBA reagent with MS detection techniques revealed several new exposure situations for isocyanates, such as isocyanic acid during thermal degradation of PUR and urea-based resins. Further, a method for characterising isocyanates in technical products used in the production of PUR was developed. This enabled determination of isocyanates in air for which pure analytical standards are missing. Tandem MS (MS/MS) determination of isocyanates in air below 10⁻⁶ of the threshold limit values was achieved. As for the determination of amines, the analytical methods included derivatisation into pentafluoropropionic amide or ethyl carbamate ester derivatives and subsequent MS analysis. Several amines in biological fluids, as markers of exposure to either the amines themselves or the corresponding isocyanates, were determined by LC-MS/MS at the amol level. In aqueous extraction solutions of flexible PUR foam products, toluene diamine and related compounds were found.
In conclusion, this thesis demonstrates the usefulness of well-characterised analytical procedures and techniques for the determination of hazardous compounds. Without reliable and robust methodologies, there is a risk that exposure levels will be underestimated or, even worse, that relevant compounds will be missed entirely.