940 results for National Science Foundation (U.S.). Research Applied to National Needs Program.
Abstract:
We analyze a real data set pertaining to reindeer fecal pellet-group counts obtained from a survey conducted in a forest area in northern Sweden. In the data set, over 70% of counts are zeros, and there is high spatial correlation. We use conditionally autoregressive random effects to model spatial correlation in a Poisson generalized linear mixed model (GLMM), quasi-Poisson hierarchical generalized linear model (HGLM), zero-inflated Poisson (ZIP), and hurdle models. The quasi-Poisson HGLM allows for both under- and overdispersion with excessive zeros, while the ZIP and hurdle models allow only for overdispersion. In analyzing the real data set, we see that the quasi-Poisson HGLMs can perform better than the other commonly used models, for example, ordinary Poisson HGLMs, spatial ZIP, and spatial hurdle models, and that the underdispersed Poisson HGLMs with spatial correlation fit the reindeer data best. We develop R code for fitting these models using a unified algorithm for the HGLMs. Spatial count responses with an extremely high proportion of zeros and underdispersion can be successfully modeled using the quasi-Poisson HGLM with spatial random effects.
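For orientation, the core structure of such a spatial Poisson model can be sketched as follows (standard GLMM/CAR notation; the paper's exact specification may differ):

```latex
% Conditional Poisson response with a log link and CAR spatial random effects
\begin{aligned}
y_i \mid \mathbf{u} &\sim \mathrm{Poisson}(\mu_i), \\
\log \mu_i &= \mathbf{x}_i^\top \boldsymbol{\beta} + u_i, \\
\mathbf{u} &\sim \mathrm{N}\!\left(\mathbf{0},\; \tau^2 (\mathbf{D} - \rho \mathbf{W})^{-1}\right),
\end{aligned}
```

where W is the site adjacency matrix, D is diagonal with the neighbor counts, and ρ controls the strength of spatial dependence. The quasi-Poisson HGLM additionally introduces a dispersion parameter φ with conditional variance φμ_i, so φ < 1 captures the underdispersion found to fit the reindeer data best.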
Abstract:
Thesis (Master's)--University of Washington, 2016-06
Abstract:
Modern software application testing, such as the testing of software driven by graphical user interfaces (GUIs) or leveraging event-driven architectures in general, requires paying careful attention to context. Model-based testing (MBT) approaches first acquire a model of an application, then use the model to construct test cases covering relevant contexts. A major shortcoming of state-of-the-art automated model-based testing is that many test cases proposed by the model are not actually executable. These infeasible test cases threaten the integrity of the entire model-based suite, and any coverage of contexts the suite aims to provide. In this research, I develop and evaluate a novel approach for classifying the feasibility of test cases. I identify a set of pertinent features for the classifier, and develop novel methods for extracting these features from the outputs of MBT tools. I use a supervised logistic regression approach to obtain a model of test case feasibility from a randomly selected training suite of test cases. I evaluate this approach with a set of experiments. The outcomes of this investigation are as follows: I confirm that infeasibility is prevalent in MBT, even for test suites designed to cover a relatively small number of unique contexts. I confirm that the frequency of infeasibility varies widely across applications. I develop and train a binary classifier for feasibility with average overall error, false positive, and false negative rates under 5%. I find that unique event IDs are key features of the feasibility classifier, while model-specific event types are not. I construct three types of features from the event IDs associated with test cases, and evaluate the relative effectiveness of each within the classifier. To support this study, I also develop a number of tools and infrastructure components for scalable execution of automated jobs, which use state-of-the-art container and continuous integration technologies to enable parallel test execution and the persistence of all experimental artifacts.
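As a rough illustration of the classification step described (not the author's implementation; the toy data layout and event-ID featurization here are hypothetical), a binary feasibility classifier over event-ID features might look like:

```python
# Sketch: logistic-regression feasibility classifier over event-ID features.
# Hypothetical layout: each test case is a sequence of event IDs; the label
# marks whether the generated test case actually executed (1 = feasible).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

test_cases = ["e12 e47 e3", "e12 e9", "e3 e47 e5",
              "e9 e9 e1", "e12 e3", "e1 e47"]          # toy event-ID sequences
labels = [1, 0, 1, 0, 1, 0]                            # 1 = feasible, 0 = infeasible

# One simple featurization: counts of unique event IDs per test case.
vectorizer = CountVectorizer(token_pattern=r"\S+")
X = vectorizer.fit_transform(test_cases)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, labels, test_size=1 / 3, stratify=labels, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# False positive / false negative rates are read off the confusion matrix.
print(confusion_matrix(y_te, clf.predict(X_te)))
```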
Abstract:
The transfer coefficients for momentum and heat have been determined for 10 m neutral wind speeds ($U_{10n}$) between 0 and 12 m/s using data from the Surface of the Ocean, Fluxes and Interactions with the Atmosphere (SOFIA) and Structure des Echanges Mer-Atmosphere, Proprietes des Heterogeneites Oceaniques: Recherche Experimentale (SEMAPHORE) experiments. The inertial dissipation method was applied to wind and pseudo virtual temperature spectra from a sonic anemometer mounted on a platform (ship) that was moving through the turbulence field. Under unstable conditions the assumptions concerning the turbulent kinetic energy (TKE) budget appeared incorrect: using a bulk estimate for the stability parameter $Z/L$ (where $Z$ is the height and $L$ is the Obukhov length) resulted in anomalously low drag coefficients compared to neutral conditions, and determining $Z/L$ iteratively converged poorly. It was concluded that the divergence of the turbulent transport of TKE was not negligible under unstable conditions. By minimizing the dependence of the calculated neutral drag coefficient on stability, this term was estimated at about $-0.65\,Z/L$. The resulting turbulent fluxes were then in close agreement with other studies at moderate wind speed. The drag and exchange coefficients for low wind speeds were found to be $C_{en} \times 10^3 = 2.79\,U_{10n}^{-1} + 0.66$ ($U_{10n} < 5.2$ m/s), $C_{en} \times 10^3 = C_{hn} \times 10^3 = 1.2$ ($U_{10n} \geq 5.2$ m/s), and $C_{dn} \times 10^3 = 11.7\,U_{10n}^{-2} + 0.668$ ($U_{10n} < 5.5$ m/s), which imply a rapid increase of the coefficient values as the wind decreases within the smooth flow regime. The frozen turbulence hypothesis and the assumptions of isotropy and an inertial subrange were found to remain valid at these low wind speeds for these shipboard measurements. Incorporating a free-convection parameterization had little effect.
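The low-wind fits quoted above can be read directly as piecewise functions of the neutral 10 m wind speed; the sketch below is a literal transcription of the abstract's formulas (behavior outside the quoted ranges is not specified there):

```python
# Literal transcription of the abstract's low-wind-speed fits.
# u: 10 m neutral wind speed U_10n in m/s; each function returns coefficient x 10^3.

def c_en_x1000(u):
    """Neutral exchange coefficient (x 10^3): 2.79/u + 0.66 below 5.2 m/s, else 1.2."""
    return 2.79 / u + 0.66 if u < 5.2 else 1.2

def c_hn_x1000(u):
    """Heat exchange coefficient (x 10^3); the abstract only quotes it above 5.2 m/s."""
    if u >= 5.2:
        return 1.2
    raise ValueError("C_hn below 5.2 m/s is not given in the abstract")

def c_dn_x1000(u):
    """Neutral drag coefficient (x 10^3): 11.7/u^2 + 0.668 below 5.5 m/s."""
    if u < 5.5:
        return 11.7 / u**2 + 0.668
    raise ValueError("C_dn at or above 5.5 m/s is not given in the abstract")

# Example: the rapid increase of C_dn as the wind decreases in the smooth-flow regime.
for u in (1.0, 2.0, 5.0):
    print(u, round(c_dn_x1000(u), 2))
```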
Abstract:
This study tested the hypothesis that social engagement (SE) with peers is a fundamental aspect of social competence during early childhood. Relations between SE and a set of previously validated social competence indicators, as well as additional variables derived from observation and sociometric interviews, were assessed using both variable-centered and person-centered approaches (N = 1453, 696 girls) in 4 samples (3 U.S.A., 1 Portuguese). Directly observed SE was positively associated with broad-band measures of socially competent behavior, peer acceptance, and being a target of peers' attention, as well as with broad-band personality dimensions. Using individual Q-items significantly associated with SE in 3 of our 4 samples, a hierarchical cluster analysis yielded a 5-cluster solution that grouped cases efficiently. Tests of relations between cluster membership and the set of social competence and other variables revealed significant main effects of cluster membership in the full sample and within each individual sample separately. With the exception of tests for peer negative preference, children in the lowest SE cluster also had significantly lower overall social competence and personality functioning scores than did children in higher SE clusters.
Abstract:
Nowadays, cities deal with unprecedented pollution and overpopulation problems, and Internet of Things (IoT) technologies are supporting them in facing these issues and becoming increasingly smart. IoT sensors embedded in public infrastructure can provide granular data on the urban environment and help public authorities make their cities more sustainable and efficient. Nonetheless, this pervasive data collection also raises serious surveillance risks, jeopardizing privacy and data protection rights. Against this backdrop, this thesis addresses how IoT surveillance technologies can be implemented in a legally compliant and ethically acceptable fashion in smart cities. An interdisciplinary approach is embraced to investigate this question, combining doctrinal legal research (on privacy, data protection, and criminal procedure) with insights from philosophy, governance, and urban studies. The fundamental normative argument of this work is that surveillance constitutes a necessary feature of modern information societies. Nonetheless, as the complexity of surveillance phenomena increases, there emerges a need for more finely attuned proportionality assessments to ensure the legitimate implementation of monitoring technologies. This research tackles this gap from different perspectives, analyzing EU data protection legislation and United States and European case law on privacy expectations and surveillance. Specifically, a coherent multi-factor test assessing privacy expectations in public IoT environments and a surveillance taxonomy are proposed to inform proportionality assessments of surveillance initiatives in smart cities. These insights are also applied to four use cases: facial recognition technologies, drones, environmental policing, and smart nudging. Lastly, the investigation examines competing data governance models in the digital domain and the smart city, reviewing the EU's upcoming data governance framework. It is argued that, despite the stated policy goals, the balance of interests may often favor corporate strategies in data sharing, to the detriment of common-good uses of data in the urban context.
Abstract:
According to some estimates, the world's population is expected to grow by about 50% over the next 50 years. Thus, one of the greatest challenges faced by engineering is to find effective options for food storage and preservation. Some researchers have investigated how to design durable buildings for storing and conserving food. Nowadays, producing concrete with adequate mechanical strength at room temperature is easily achieved. On the other hand, combining it with service temperatures as low as approximately −35 °C leaves less room for empiricism, requiring a suitable mix-design method and a careful selection of the constituent materials. This ongoing study addresses these parameters. The concrete presented was analyzed through non-destructive tests that examine the material properties periodically and verify its physical integrity. Concretes with and without entrained air were studied. The results demonstrate that both are resistant to freezing.
Abstract:
The implementation of confidential contracts between a container liner carrier and its customers, following the Ocean Shipping Reform Act (OSRA) of 1998, demands a revision of the methodology applied in the carrier's marketing and sales planning. The planning process should be more scientific and make better use of operational research tools, since the selection of customers under contract, the duration of the contracts, the freight rates, and the container imbalances of these contracts are basic factors in the carrier's yield. This work aims to develop a decision support system, based on a linear programming model, to generate the business plan for a container liner carrier, maximizing the contribution margin of its freight.
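A minimal sketch of the kind of linear program involved (the data and variable names are hypothetical; the actual decision-support model in the work is richer, also covering contract duration and container imbalances):

```python
# Sketch: choose contract acceptance levels x_i in [0, 1] to maximize total
# contribution margin subject to slot (TEU) capacity. Hypothetical data.
from scipy.optimize import linprog

margin = [120.0, 95.0, 180.0]      # contribution margin per contract (k$)
teu = [400.0, 250.0, 700.0]        # slot demand per contract (TEU)
capacity = 900.0                   # vessel capacity on the trade lane (TEU)

# linprog minimizes, so negate the margins to maximize.
res = linprog(c=[-m for m in margin],
              A_ub=[teu], b_ub=[capacity],
              bounds=[(0, 1)] * len(margin))

print(res.x)      # accepted fraction of each contract
print(-res.fun)   # total contribution margin
```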
Abstract:
New measurements by the PHENIX experiment at the Relativistic Heavy Ion Collider of $\eta$ production at midrapidity as a function of transverse momentum ($p_T$) and collision centrality in $\sqrt{s_{NN}} = 200$ GeV Au+Au and p+p collisions are presented. They indicate nuclear modification factors ($R_{AA}$) that are similar in both magnitude and trend to those found in earlier $\pi^0$ measurements. Linear fits to $R_{AA}$ as a function of $p_T$ over 5-20 GeV/c show that the slope is consistent with zero within two standard deviations at all centralities, although a slow rise cannot be excluded. Because they have different statistical and systematic uncertainties, the $\pi^0$ and $\eta$ measurements are complementary at high $p_T$; thus, along with the extended $p_T$ range of these data, they can provide additional constraints for theoretical modeling and the extraction of transport properties.
Abstract:
We report the observation at the Relativistic Heavy Ion Collider of suppression of back-to-back correlations in the direct photon+jet channel in Au+Au relative to p+p collisions. Two-particle correlations of direct photon triggers with associated hadrons are obtained by statistical subtraction of the decay photon-hadron ($\gamma$-h) background. The initial momentum of the away-side parton is tightly constrained, because the parton-photon pair exactly balances in momentum at leading order in perturbative quantum chromodynamics, making such correlations a powerful probe of in-medium parton energy loss. The away-side nuclear suppression factor, $I_{AA}$, in central Au+Au collisions is $0.32 \pm 0.12(\mathrm{stat}) \pm 0.09(\mathrm{syst})$ for hadrons of $3 < p_T^h < 5$ GeV/c in coincidence with photons of $5 < p_T^\gamma < 15$ GeV/c. The suppression is comparable to that observed for high-$p_T$ single hadrons and dihadrons. The direct-photon associated yields in p+p collisions scale approximately with the momentum balance, $z_T \equiv p_T^h / p_T^\gamma$, as expected for a measurement of the away-side parton fragmentation function. We compare to Au+Au collisions, for which the momentum-balance dependence of the nuclear modification should be sensitive to the path-length dependence of parton energy loss.
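For reference, $I_{AA}$ is the standard ratio of per-trigger conditional yields (a textbook definition, consistent with the abstract's usage):

```latex
I_{AA}(z_T) \;=\; \frac{Y^{\mathrm{Au+Au}}(z_T)}{Y^{p+p}(z_T)},
\qquad
z_T \;\equiv\; \frac{p_T^{h}}{p_T^{\gamma}},
```

where $Y$ denotes the associated hadron yield per direct-photon trigger, so $I_{AA} < 1$ signals medium-induced suppression of the away-side jet.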
Abstract:
In this work, we employed the effective coordination concept to study the local environments of the Ge, Sb, and Te atoms in the Ge$_m$Sb$_{2n}$Te$_{m+3n}$ compounds. From our calculations and analysis, we found average effective coordination number (ECN) reductions of 1.59, 1.42, and 1.37 for the Ge, Sb, and Te atoms in the transition from the crystalline phase, ECN = 5.55 (Ge), 5.73 (Sb), 4.37 (Te), to the amorphous phase, ECN = 3.96 (Ge), 4.31 (Sb), 3.09 (Te), for the Ge$_2$Sb$_2$Te$_5$ composition. Similar changes are observed for other compositions. Thus, our results indicate that the coordination changes from the crystalline to the amorphous phase are not as large as previously assumed in the literature, i.e., from sixfold to fourfold for Ge, which can contribute to a better understanding of the crystalline-to-amorphous phase transition. (C) 2011 American Institute of Physics. [doi:10.1063/1.3533422]
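One common self-consistent definition of the effective coordination number weights each bond by how close its length is to an iteratively updated average bond length; the sketch below follows that Hoppe-style form (the paper's exact formulation may differ):

```python
# Sketch: self-consistent effective coordination number (ECN) from bond lengths.
# Hoppe-style weighting; an assumption, not necessarily the paper's exact recipe.
import math

def ecn(bond_lengths, tol=1e-6, max_iter=100):
    """ECN = sum_j exp(1 - (d_j / d_av)^6), with d_av updated self-consistently."""
    d_av = min(bond_lengths)  # start from the shortest bond
    for _ in range(max_iter):
        w = [math.exp(1.0 - (d / d_av) ** 6) for d in bond_lengths]
        d_new = sum(d * wi for d, wi in zip(bond_lengths, w)) / sum(w)
        if abs(d_new - d_av) < tol:
            break
        d_av = d_new
    return sum(math.exp(1.0 - (d / d_av) ** 6) for d in bond_lengths)

# Example: six equal bonds give ECN = 6; distorted bonds give a smaller value.
print(ecn([2.8] * 6))                         # -> 6.0
print(ecn([2.6, 2.6, 2.6, 3.2, 3.2, 3.2]))    # -> less than 6
```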
Abstract:
Tomato cultivation demands large quantities of mineral nutrients, which are supplied by synthetic fertilizers in the conventional cultivation system. In the organic cultivation system, only alternative fertilizers allowed by the certifiers and accepted as safe for humans and the environment are used. The chemical compositions of rice bran, oyster flour, cattle manure, and ground charcoal, as well as of soils and tomato fruits, were evaluated by instrumental neutron activation analysis (INAA). The potential contribution of organic fertilizers to the enrichment of chemical elements in soil and their transfer to fruits was investigated using concentration ratios for fertilizer and soil samples, and also for soil and tomato. The results showed that these alternative fertilizers can be important sources of Br, Ca, Ce, K, Na, and Zn for organic tomato cultivation.
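The transfer metrics mentioned reduce to simple concentration ratios (the notation below is illustrative, not the paper's):

```latex
\mathrm{CR}_{\text{fert/soil}} = \frac{C_{\text{fertilizer}}}{C_{\text{soil}}},
\qquad
\mathrm{CR}_{\text{fruit/soil}} = \frac{C_{\text{fruit}}}{C_{\text{soil}}},
```

where $C$ is the elemental concentration measured by INAA in each sample type.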
Abstract:
The agricultural supplies used in the organic system to control pests and diseases, as well as to fertilize soil, are claimed to be beneficial to plants and innocuous to human health and the environment. The chemical composition of six agricultural supplies commonly used in organic tomato cultivation was evaluated by instrumental neutron activation analysis (INAA). Results were compared to the maximum limits established by the Environmental Control Agency of Sao Paulo State (CETESB) and the Guidelines for Organic Quality Standard of Instituto Biodinamico (IBD). Concentrations above the reference values were found for Co, Cr, and Zn in compost, Cr and Zn in cattle manure, and Zn in rice bran.
Abstract:
This work proposes a method based on preprocessing and data mining to identify harmonic current sources in residential consumers. In addition, the methodology can also be applied to identify linear and nonlinear loads. It should be emphasized that the entire database was obtained through laboratory tests, i.e., real data were acquired from residential loads. The residential system assembled in the laboratory was fed by a configurable power source, with the loads and power quality analyzers connected at its output (all measurements were stored on a microcomputer). The data then underwent preprocessing based on attribute selection techniques to reduce the complexity of load identification. A new database retaining only the selected attributes was generated, and Artificial Neural Networks were trained to identify the loads. To validate the proposed methodology, the loads were fed both under ideal conditions (without harmonics) and with harmonic voltages within pre-established limits. These limits are in accordance with IEEE Std. 519-1992 and PRODIST (the electricity distribution procedures employed by Brazilian utilities). The results validate the proposed methodology and furnish a method that can serve as an alternative to conventional approaches.
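A rough sketch of the pipeline described, attribute selection followed by neural-network training (the data and parameter choices are hypothetical stand-ins; the original work's exact techniques may differ):

```python
# Sketch: attribute selection + ANN load identification. Hypothetical layout:
# each row holds features from a power-quality analyzer; the label is the load type.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Stand-in for the laboratory measurement database.
X, y = make_classification(n_samples=300, n_features=40,
                           n_informative=8, random_state=0)

# Attribute selection: keep the k most discriminative measurements.
selector = SelectKBest(f_classif, k=8)
X_sel = selector.fit_transform(X, y)

X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, test_size=0.3, random_state=0)
ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X_tr, y_tr)
print("accuracy:", ann.score(X_te, y_te))
```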
Abstract:
Semi-empirical models to estimate the flow boiling heat transfer coefficient, saturated CHF, and pressure drop in micro-scale channels have recently been proposed. Most of these models were developed for elongated-bubble and annular flows, given that these flow patterns predominate in smaller channels. In these models the liquid film thickness plays an important role, which makes its accurate measurement a key point in validating them. Several techniques have been successfully applied to measure liquid film thicknesses during condensation and evaporation under macro-scale conditions. However, although this subject has been targeted by several leading laboratories around the world, there appears to be no conclusive result describing a successful technique capable of measuring dynamic liquid film thickness during evaporation inside micro-scale round channels. This work presents a comprehensive literature review of the methods used to measure liquid film thickness in macro- and micro-scale systems. The methods are described, and the main difficulties related to their use in micro-scale systems are identified. Based on this discussion, the most promising methods for measuring dynamic liquid film thickness in micro-scale channels are identified. (C) 2009 Elsevier Inc. All rights reserved.