952 results for subgrid-scale models
Abstract:
Variability management is one of the major challenges in software product line adoption, since variability needs to be efficiently managed at various levels of the software product line development process (e.g., requirements analysis, design, implementation). One of the main challenges within variability management is the handling and effective visualization of large-scale (industry-size) models, which, in many projects, can reach the order of thousands, along with the dependency relationships that exist among them. These issues have raised many concerns regarding the scalability of current variability management tools and techniques and their lack of industrial adoption. To address the scalability issues, this work employed a combination of quantitative and qualitative research methods to identify the reasons behind the limited scalability of existing variability management tools and techniques. In addition to producing a comprehensive catalogue of existing tools, the outcome from this stage helped identify the major limitations of existing tools. Based on the findings, a novel approach to managing variability was created that employs two main principles to support scalability. First, the separation-of-concerns principle was applied by creating multiple views of variability models to alleviate information overload. Second, hyperbolic trees were used to visualise models, in contrast to the Euclidean-space trees traditionally used. The result is an approach that can represent models encompassing hundreds of variability points and complex relationships. These concepts were demonstrated by implementing them in an existing variability management tool and using it to model a real-life product line with over a thousand variability points. Finally, in order to assess the work, an evaluation framework was designed based on established usability assessment best practices and standards. The framework was then used with several case studies to benchmark the performance of this work against other existing tools.
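The hyperbolic-tree idea can be illustrated with a small layout sketch. The following Python snippet is not the thesis tool; it is a minimal, schematic stand-in for a focus-plus-context layout in the unit (Poincaré) disk, in which each level of the tree is pushed toward the boundary and therefore occupies geometrically less screen area than in a Euclidean layout. The node names and the halving factor are illustrative assumptions.

import math

def layout(node, center=(0.0, 0.0), radius=0.5, angle_span=(0.0, 2 * math.pi), positions=None):
    # Place each node inside the unit disk; children are spread over the
    # parent's angular sector and the radial step is halved at every level,
    # so even deep feature trees stay within the disk (0.5 + 0.25 + ... < 1).
    if positions is None:
        positions = {}
    positions[node["id"]] = center
    children = node.get("children", [])
    if not children:
        return positions
    start, end = angle_span
    step = (end - start) / len(children)
    for i, child in enumerate(children):
        theta = start + (i + 0.5) * step
        child_center = (center[0] + radius * math.cos(theta),
                        center[1] + radius * math.sin(theta))
        layout(child, child_center, radius * 0.5,
               (theta - step / 2, theta + step / 2), positions)
    return positions

# Hypothetical miniature variability model: a root feature with two children.
tree = {"id": "root", "children": [{"id": "featureA"}, {"id": "featureB"}]}
print(layout(tree))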
Abstract:
The resilience of a social-ecological system is measured by its ability to retain core functionality when subjected to perturbation. Resilience is contextually dependent on the state of system components, the complex interactions among these components, and the timing, location, and magnitude of perturbations. The stability landscape concept provides a useful framework for considering resilience within the specified context of a particular social-ecological system but has proven difficult to operationalize. This difficulty stems largely from the complex, multidimensional nature of the systems of interest and uncertainty in system response. Agent-based models are an effective methodology for understanding how cross-scale processes within and across social and ecological domains contribute to overall system resilience. We present the results of a stylized model of agricultural land use in a small watershed that is typical of the Midwestern United States. The spatially explicit model couples land use, biophysical models, and economic drivers with an agent-based model to explore the effects of perturbations and policy adaptations on system outcomes. By applying the coupled modeling approach within the resilience and stability landscape frameworks, we (1) estimate the sensitivity of the system to context-specific perturbations, (2) determine potential outcomes of those perturbations, (3) identify possible alternative states within state space, (4) evaluate the resilience of system states, and (5) characterize changes in system-scale resilience brought on by changes in individual land use decisions.
Abstract:
In this paper, the temperature of a pilot-scale batch reaction system is modeled with a view to designing a controller based on the explicit model predictive control (EMPC) strategy. Several mathematical models are developed from experimental data to describe the system behavior. The simplest reliable model obtained is a (1,1,1)-order ARX polynomial model, for which the EMPC controller has been designed. The resulting controller has reduced mathematical complexity and, given the successful results obtained in simulations, will be applied directly to the real control system in the next stage of the experimental framework.
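For reference, a (1,1,1)-order ARX structure has one past output term, one past input term, and a one-sample input delay; written out with generic symbols (not taken from the paper):

y(k) + a_1 y(k-1) = b_1 u(k-1) + e(k)

where y is the reactor temperature, u the manipulated input, e a white-noise disturbance, and a_1, b_1 are coefficients identified from the experimental data. Explicit MPC then precomputes the optimal control law offline as a piecewise-affine function of the state, u(k) = F_i x(k) + g_i for x(k) in polyhedral region R_i, so that only a region lookup and an affine evaluation are needed online; this is the standard form of EMPC in general, rather than a detail reported in the abstract.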
Abstract:
In the past decade, systems that extract information from millions of Internet documents have become commonplace. Knowledge graphs (structured knowledge bases that describe entities, their attributes and the relationships between them) are a powerful tool for understanding and organizing this vast amount of information. However, a significant obstacle to knowledge graph construction is the unreliability of the extracted information, due to noise and ambiguity in the underlying data, errors made by the extraction system, and the complexity of reasoning about the dependencies between these noisy extractions. My dissertation addresses these challenges by exploiting the interdependencies between facts to improve the quality of the knowledge graph in a scalable framework. I introduce a new approach called knowledge graph identification (KGI), which resolves the entities, attributes and relationships in the knowledge graph by incorporating uncertain extractions from multiple sources, entity co-references, and ontological constraints. I define a probability distribution over possible knowledge graphs and infer the most probable knowledge graph using a combination of probabilistic and logical reasoning. Such probabilistic models are frequently dismissed due to scalability concerns, but my implementation of KGI maintains tractable performance on large problems through the use of hinge-loss Markov random fields, which have a convex inference objective. This allows the inference of large knowledge graphs with 4M facts and 20M ground constraints in 2 hours. To further scale the solution, I develop a distributed approach to the KGI problem which runs in parallel across multiple machines, reducing inference time by 90%. Finally, I extend my model to the streaming setting, where a knowledge graph is continuously updated by incorporating newly extracted facts. I devise a general approach for approximately updating inference in convex probabilistic models, and quantify the approximation error by defining and bounding inference regret for online models. Together, my work retains the attractive features of probabilistic models while providing the scalability necessary for large-scale knowledge graph construction. These models have been applied to a number of real-world knowledge graph projects, including the NELL project at Carnegie Mellon and the Google Knowledge Graph.
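For context, hinge-loss Markov random fields (the model family behind KGI, as used in probabilistic soft logic) define a density over continuous variables y in [0,1]^n of the general form

P(y | x) ∝ exp( − Σ_{r=1..m} w_r · max{ ℓ_r(y, x), 0 }^{ρ_r} )

where each ℓ_r is a linear function of y obtained from a grounded logical rule, w_r ≥ 0 is the rule weight, and ρ_r ∈ {1, 2}. Because every potential is a convex hinge, MAP inference is a convex optimization problem, which is what makes inference over millions of ground potentials tractable. The specific rules and weights used in the dissertation are not reproduced here.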
Abstract:
This dissertation focuses on design challenges caused by secondary impacts to printed wiring assemblies (PWAs) within hand-held electronics due to accidental drop or impact loading. The continuing increase in functionality, miniaturization and affordability has resulted in a decrease in the size and weight of handheld electronic products. As a result, PWAs have become thinner and the clearances between surrounding structures have decreased. The resulting increase in the flexibility of the PWAs, in combination with the reduced clearances, requires new design rules to minimize and survive possible internal collisions between PWAs and surrounding structures. Such collisions are termed 'secondary impacts' in this study. The effect of secondary impact on the board-level drop reliability of printed wiring boards (PWBs) assembled with MEMS microphone components is investigated using a combination of testing, response and stress analysis, and damage modeling. The response analysis is conducted using a combination of numerical finite element modeling and simplified analytic models for additional parametric sensitivity studies.
Abstract:
Plantings of mixed native species (termed 'environmental plantings') are increasingly being established for carbon sequestration whilst providing additional environmental benefits such as biodiversity and water quality. In Australia, they are currently one of the most common forms of reforestation. Investment in establishing and maintaining such plantings relies on having a cost-effective modelling approach to providing unbiased estimates of biomass production and carbon sequestration rates. In Australia, the Full Carbon Accounting Model (FullCAM) is used for both national greenhouse gas accounting and project-scale sequestration activities. Prior to the work presented here, the FullCAM tree growth curve had not been calibrated specifically for environmental plantings and generally under-estimated their biomass. Here we collected and analysed above-ground biomass data from 605 mixed-species environmental plantings, and tested the effects of several planting characteristics on growth rates. Plantings were then categorised based on significant differences in growth rates. Growth of plantings differed between temperate and tropical regions. Tropical plantings were relatively uniform in terms of planting methods and their growth was largely related to stand age, consistent with the un-calibrated growth curve. However, in temperate regions where plantings were more variable, key factors influencing growth were planting width, stand density and species mix (the proportion of individuals that were trees). These categories provided the basis for FullCAM calibration. Although the overall model efficiency was only 39-46%, there was nonetheless no significant bias when the model was applied to the various planting categories. Thus, modelled estimates of biomass accumulation will be reliable on average, but estimates at any particular location will be uncertain, with either under- or over-prediction possible. When compared with the un-calibrated yield curves, predictions using the new calibrations show that early growth is likely to be more rapid and total above-ground biomass may be higher for many plantings at maturity. This study has considerably improved understanding of the patterns of growth in different types of environmental plantings, and of modelling biomass accumulation in young (<25 years old) plantings. However, significant challenges remain in understanding longer-term stand dynamics, particularly with temporal changes in stand density and species composition.
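As an aside on the reported statistics, 'model efficiency' in this kind of calibration work is usually a Nash-Sutcliffe-style statistic and 'bias' a mean error; the exact definitions used in the study are not given in the abstract, so the short Python sketch below simply shows the conventional formulas with made-up biomass numbers.

import numpy as np

def model_efficiency(observed, predicted):
    # Nash-Sutcliffe-style efficiency: 1 is a perfect fit, 0 means the model
    # is no better than predicting the observed mean everywhere.
    observed = np.asarray(observed, float)
    predicted = np.asarray(predicted, float)
    ss_res = np.sum((observed - predicted) ** 2)
    ss_tot = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def mean_bias(observed, predicted):
    # Positive values indicate systematic over-prediction.
    return float(np.mean(np.asarray(predicted, float) - np.asarray(observed, float)))

# Hypothetical above-ground biomass values (t/ha) for one planting category.
obs = [12.0, 30.5, 48.0, 75.2]
pred = [15.1, 28.0, 52.3, 70.4]
print(model_efficiency(obs, pred), mean_bias(obs, pred))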
Abstract:
The predictive capabilities of computational fire models have improved in recent years such that models have become an integral part of many research efforts. Models improve the understanding of the fire risk of materials and may decrease the number of expensive experiments required to assess the fire hazard of a specific material or designed space. A critical component of a predictive fire model is the pyrolysis sub-model, which provides a mathematical representation of the rate of gaseous fuel production from condensed-phase fuels given a heat flux incident to the material surface. The modern, comprehensive pyrolysis sub-models that are common today require the definition of many model parameters to accurately represent the physical description of materials that are ubiquitous in the built environment. This increase in the number of required parameters is compounded by the increasing prevalence in the built environment of engineered composite materials that have never been measured or modeled. The motivation behind this project is to develop a systematic, generalized methodology to determine the requisite parameters to generate pyrolysis models with predictive capabilities for layered composite materials that are common in industrial and commercial applications. In this work, the methodology has been applied to four common composites that exhibit a range of material structures and component materials. The methodology utilizes a multi-scale experimental approach in which each test is designed to isolate and determine a specific subset of the parameters required to define a material in the model. Data collected in simultaneous thermogravimetry and differential scanning calorimetry experiments were analyzed to determine the reaction kinetics, thermodynamic properties, and energetics of decomposition for each component of the composite. Data collected in microscale combustion calorimetry experiments were analyzed to determine the heats of complete combustion of the volatiles produced in each reaction. Inverse analyses were conducted on sample temperature data collected in bench-scale tests to determine the thermal transport parameters of each component through degradation. Simulations of quasi-one-dimensional bench-scale gasification tests generated from the resultant models using the ThermaKin modeling environment were compared to experimental data to independently validate the models.
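As background for the kinetics step, comprehensive pyrolysis sub-models of this type typically describe each decomposition reaction with an Arrhenius rate fitted to the thermogravimetric data; a generic nth-order form (not the specific reaction schemes of the four composites studied) is

dα/dt = A · exp( −E_a / (R·T) ) · (1 − α)^n

where α is the extent of conversion of the reacting component, A the pre-exponential factor, E_a the activation energy, R the gas constant, T the sample temperature, and n the reaction order. A, E_a and n are the kinetic parameters extracted from the thermogravimetry/DSC experiments, while the DSC signal provides the associated heats of reaction.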
Abstract:
Current coastal-evolution models generally lack the ability to accurately predict bed-level change in shallow (<~2 m) water, which is, at least partly, due to the omission of the effect of surface-induced turbulence on sand suspension and transport. As a first step to remedy this situation, we investigated the vertical structure of turbulence in the surf and swash zone using measurements collected under random shoaling and plunging waves on a steep (initially 1:15) field-scale sandy laboratory beach. Seaward of the swash zone, turbulence was measured with a vertical array of three Acoustic Doppler Velocimeters (ADVs), while in the swash zone two vertically spaced Acoustic Doppler Velocimeter profilers (Vectrino profilers) were applied. The vertical turbulence structure evolves from bottom-dominated to approximately vertically uniform as the fraction of breaking waves increases to ~50%. In the swash zone, the turbulence is predominantly bottom-induced during the backwash and shows a homogeneous profile during the uprush. We further find that the instantaneous turbulence kinetic energy is phase-coupled with the short-wave orbital motion under the plunging breakers, with higher levels shortly after the reversal from offshore to onshore motion (i.e. at the wave front).
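For readers unfamiliar with the quantity, turbulence kinetic energy per unit mass is computed from the velocity fluctuations as k = 0.5 (⟨u'²⟩ + ⟨v'²⟩ + ⟨w'²⟩). The short Python sketch below shows only that bookkeeping; how the wave-orbital motion is separated from the turbulent fluctuations (filtering, ensemble averaging, or otherwise) is a key methodological choice in such studies and is deliberately left outside this snippet.

import numpy as np

def turbulence_kinetic_energy(u_prime, v_prime, w_prime):
    # k = 0.5 * (<u'^2> + <v'^2> + <w'^2>), in m^2/s^2, from the three
    # components of the turbulent velocity fluctuations (mean and wave
    # motion assumed already removed by whatever decomposition is used).
    u_prime = np.asarray(u_prime, float)
    v_prime = np.asarray(v_prime, float)
    w_prime = np.asarray(w_prime, float)
    return 0.5 * (np.mean(u_prime**2) + np.mean(v_prime**2) + np.mean(w_prime**2))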
Abstract:
Visual recognition is a fundamental research topic in computer vision. This dissertation explores datasets, features, learning, and models used for visual recognition. In order to train visual models and evaluate different recognition algorithms, this dissertation develops an approach to collecting object image datasets from web pages using an analysis of the text around each image and of the image's appearance. This method exploits established online knowledge resources (Wikipedia pages for text; Flickr and Caltech data sets for images), which provide rich text and object appearance information. This dissertation describes results on two datasets. The first is Berg's collection of 10 animal categories; on this dataset, we significantly outperform previous approaches. On an additional set of 5 categories, experimental results show the effectiveness of the method. Images are represented as features for visual recognition. This dissertation introduces a text-based image feature and demonstrates that it consistently improves performance on hard object classification problems. The feature is built using an auxiliary dataset of images annotated with tags, downloaded from the Internet. Image tags are noisy. The method obtains the text features of an unannotated image from the tags of its k-nearest neighbors in this auxiliary collection. A visual classifier presented with an object viewed under novel circumstances (say, a new viewing direction) must rely on its visual examples; the text feature, in contrast, may not change, because the auxiliary dataset likely contains a similar picture. While the tags associated with images are noisy, they are more stable under appearance changes. The performance of this feature is tested on the PASCAL VOC 2006 and 2007 datasets. The feature performs well; it consistently improves the performance of visual object classifiers, and is particularly effective when the training dataset is small. As more and more training data are collected, computational cost becomes a bottleneck, especially when training sophisticated classifiers such as kernelized SVMs. This dissertation proposes a fast training algorithm called the Stochastic Intersection Kernel Machine (SIKMA). The proposed training method will be useful for many vision problems, as it can produce a kernel classifier that is more accurate than a linear classifier and can be trained on tens of thousands of examples in two minutes. It processes training examples one by one in sequence, so memory cost is no longer the bottleneck when processing large-scale datasets. This dissertation applies the approach to train classifiers for Flickr groups, each of which has many training examples. The resulting Flickr group prediction scores can be used to measure the similarity between two images. Experimental results on the Corel dataset and a PASCAL VOC dataset show that the learned Flickr features perform better on image matching, retrieval, and classification than conventional visual features. Visual models are usually trained to best separate positive and negative training examples. However, when recognizing a large number of object categories, there may not be enough training examples for most objects, due to the intrinsic long-tailed distribution of objects in the real world. This dissertation proposes an approach that uses comparative object similarity.
The key insight is that, given a set of object categories which are similar and a set of categories which are dissimilar, a good object model should respond more strongly to examples from similar categories than to examples from dissimilar categories. This dissertation develops a regularized kernel machine algorithm that uses this category-dependent similarity regularization. Experiments on hundreds of categories show that the method yields significant improvements for categories with few or even no positive examples.
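To make the SIKMA discussion concrete, the histogram intersection kernel it is built around is shown below in a minimal Python sketch; the example histograms are invented, and the snippet shows only the kernel itself, not the stochastic training procedure described in the dissertation.

import numpy as np

def intersection_kernel(x, z):
    # Histogram intersection kernel: K(x, z) = sum_i min(x_i, z_i).
    # The kernel is additive over feature dimensions, which is the property
    # that allows the decision function of an intersection kernel machine to
    # be evaluated dimension-by-dimension rather than by summing over all
    # support vectors.
    x = np.asarray(x, float)
    z = np.asarray(z, float)
    return float(np.minimum(x, z).sum())

# Two hypothetical L1-normalised bag-of-visual-words histograms.
h1 = np.array([0.2, 0.5, 0.3])
h2 = np.array([0.1, 0.6, 0.3])
print(intersection_kernel(h1, h2))  # 0.1 + 0.5 + 0.3 = 0.9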
Abstract:
This research develops four case studies on small-scale fisheries in Central America located within indigenous territories: the Ngöbe-Buglé Conte Burica territory in the south of Costa Rica, the Garífuna territory in Nueva Armenia, Honduras, the Rama territory in Nicaragua, and the Ngöbe-Buglé territory in Bocas del Toro, Panamá. This is one of the first studies focusing on indigenous territories, artisanal fisheries and the SSF guidelines. The cases are a first approach to discussing and analyzing relevant social and human rights issues related to the conservation of marine resources and fisheries management in these territories. Among other issues of interest, the cases discuss the relationships between marine protected areas under different governance models and issues related to the strengthening of the small-scale fisheries of these indigenous populations and marine fishing territories. They highlight sustainability, governance, land tenure and access to fishing resources, gender, the importance of traditional knowledge, and new challenges such as climate change.
Abstract:
Magnetic fields are ubiquitous in galaxy cluster atmospheres and have a variety of astrophysical and cosmological consequences. Magnetic fields can contribute to the pressure support of clusters, affect thermal conduction, and modify the evolution of bubbles driven by active galactic nuclei. However, we currently do not fully understand the origin and evolution of these fields throughout cosmic time. Furthermore, we do not have a general understanding of the relationship between magnetic field strength and topology and other cluster properties, such as mass and X-ray luminosity. We can now begin to answer some of these questions using large-scale cosmological magnetohydrodynamic (MHD) simulations of the formation of galaxy clusters, including the seeding and growth of magnetic fields. Using large-scale cosmological simulations with the FLASH code, combined with a simplified model of the acceleration of the cosmic rays responsible for generating radio halos, we find that the galaxy cluster frequency distribution and the expected number counts of radio halos from upcoming low-frequency surveys are strongly dependent on the strength of magnetic fields. Thus, a more complete understanding of the origin and evolution of magnetic fields is necessary to understand and constrain models of diffuse synchrotron emission from clusters. One favored model for generating magnetic fields is the amplification of weak seed fields in active galactic nuclei (AGN) accretion disks and their subsequent injection into cluster atmospheres via AGN-driven jets and bubbles. However, current large-scale cosmological simulations cannot directly include the physical processes associated with the accretion and feedback processes of AGN or the seeding and merging of the associated supermassive black holes (SMBHs). Thus, we must include these effects as subgrid models. In order to carefully study the growth of magnetic fields in clusters via AGN-driven outflows, we present a systematic study of SMBH and AGN subgrid models. Using dark-matter-only cosmological simulations, we find that many important quantities, such as the relationship between SMBH mass and galactic bulge velocity dispersion and the merger rate of black holes, are highly sensitive to the subgrid model assumptions for SMBHs. In addition, using MHD calculations of an isolated cluster, we find that magnetic field strengths, extent, topology, and relationship to other gas quantities such as temperature and density are also highly dependent on the chosen model of accretion and feedback. We use these systematic studies of SMBHs and AGN to inform and constrain our choice of subgrid models, and we use those results to outline a fully cosmological MHD simulation to study the injection and growth of magnetic fields in clusters of galaxies. This simulation will be the first to study the birth and evolution of magnetic fields using a fully closed accretion-feedback cycle, with as few assumptions as possible and a clearer understanding of the effects of the various parameter choices.
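For orientation, a widely used family of SMBH subgrid prescriptions in cosmological simulations (not necessarily the exact variants compared in this work) ties the unresolved accretion rate to resolved gas properties via a Bondi-Hoyle-type estimate and injects a fixed fraction of the accreted rest-mass energy as feedback:

Ṁ_acc = α · 4π G² M_BH² ρ / (c_s² + v²)^(3/2),    Ė_feed = ε_f ε_r Ṁ_acc c²

where ρ, c_s and v are the local gas density, sound speed and relative velocity, α is an optional boost factor, ε_r is the radiative efficiency and ε_f the fraction of the radiated energy coupled back to the gas. The systematic comparison described above amounts to varying such choices (together with the SMBH seeding and merging rules) and measuring their effect on the resulting cluster magnetic fields.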
Abstract:
Insights into the genomic adaptive traits of Treponema pallidum, the causative bacterium of syphilis, have long been hampered due to the absence of in vitro culture models and the constraints associated with its propagation in rabbits. Here, we have bypassed the culture bottleneck by means of a targeted strategy never applied to uncultivable bacterial human pathogens to directly capture whole-genome T. pallidum data in the context of human infection. This strategy has unveiled a scenario of discreet T. pallidum interstrain single-nucleotide-polymorphism-based microevolution, contrasting with a rampant within-patient genetic heterogeneity mainly targeting multiple phase-variable loci and a major antigen-coding gene (tprK). TprK demonstrated remarkable variability and redundancy, intra- and interpatient, suggesting ongoing parallel adaptive diversification during human infection. Some bacterial functions (for example, flagella- and chemotaxis-associated) were systematically targeted by both inter- and intrastrain single nucleotide polymorphisms, as well as by ongoing within-patient phase variation events. Finally, patient-derived genomes possess mutations targeting a penicillin-binding protein coding gene (mrcA) that had never been reported, unveiling it as a candidate target to investigate the impact on the susceptibility to penicillin. Our findings decode the major genetic mechanisms by which T. pallidum promotes immune evasion and survival, and demonstrate the exceptional power of characterizing evolving pathogen subpopulations during human infection.
Abstract:
The presence of gap junction coupling among neurons of the central nervous system has been appreciated for some time now. In recent years there has been an upsurge of interest from the mathematical community in understanding the contribution of these direct electrical connections between cells to large-scale brain rhythms. Here we analyze a class of exactly soluble single-neuron models, capable of producing realistic action potential shapes, that can be used as the basis for understanding dynamics at the network level. This work focuses on planar piecewise-linear models that can mimic the firing response of several different cell types. Under constant current injection, the periodic response and the phase response curve (PRC) are calculated in closed form. A simple formula for the stability of a periodic orbit is found using Floquet theory. From the calculated PRC and the periodic orbit, a phase interaction function is constructed that allows the investigation of phase-locked network states using the theory of weakly coupled oscillators. For large networks with global gap junction connectivity, we develop a theory of strong coupling instabilities of the homogeneous, synchronous and splay states. For a piecewise-linear caricature of the Morris-Lecar model, with oscillations arising from a homoclinic bifurcation, we show that large-amplitude oscillations in the mean membrane potential are organized around such unstable orbits.
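As a concrete instance of the class of models being analyzed, one classic planar piecewise-linear caricature (of the FitzHugh-Nagumo/McKean type, used here only as an illustration rather than as the specific models of the paper) is

dv/dt = f(v) − w + I,    dw/dt = (v − γ w) / τ,

f(v) = −v          for v < a/2,
f(v) = v − a       for a/2 ≤ v ≤ (1 + a)/2,
f(v) = 1 − v       for v > (1 + a)/2,

so that within each linear regime the system can be solved exactly, and the solution pieces are matched at the switching boundaries to obtain the periodic orbit and its PRC in closed form. In the weak-coupling description built from the PRC, the network then reduces to phase equations of the form dθ_i/dt = ω + ε Σ_j H(θ_j − θ_i), where H is the phase interaction function assembled from the PRC and the gap-junction coupling evaluated along the periodic orbit.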
Abstract:
This study presents the development and analysis of the psychometric properties of the Deviant Behavior Variety Scale (DBVS). Participants were 861 Portuguese adolescents (54% female), aged between 12 and 19 years old. Two alternative models were tested using confirmatory factor analysis. Although both models showed good fit indices, the two-factor model did not present discriminant validity. Further results provided evidence for the factorial and convergent validity of the single-factor structure of the DBVS, which also showed good internal consistency. Criterion validity was evaluated through associations with related variables, such as age and school failure, through the scale's ability to capture group differences, namely between genders and between adolescents with and without school retentions, and finally by comparing a sub-group of convicted adolescents with a group of non-convicted ones regarding their engagement in delinquent activities. Overall, the scale presented good psychometric properties, with results supporting that the DBVS is a valid and reliable self-report measure of adolescents' involvement in deviance.
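The internal-consistency claim is typically backed by a coefficient such as Cronbach's alpha; the abstract does not say which index was used, so the Python sketch below simply shows the standard alpha computation on an invented respondents-by-items score matrix.

import numpy as np

def cronbach_alpha(scores):
    # scores: (n_respondents, n_items) matrix of item scores.
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    scores = np.asarray(scores, float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1).sum()
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances / total_variance)

# Hypothetical yes/no (0/1) responses from five adolescents to four items.
responses = np.array([[0, 0, 1, 0],
                      [1, 1, 1, 0],
                      [0, 0, 0, 0],
                      [1, 1, 0, 1],
                      [1, 0, 1, 1]])
print(cronbach_alpha(responses))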
Abstract:
Many geological formations consist of crystalline rocks that have very low matrix permeability but allow flow through an interconnected network of fractures. Understanding the flow of groundwater through such rocks is important when considering the disposal of radioactive waste in underground repositories. A specific area of interest is the conditioning of fracture transmissivities on measured values of pressure in these formations. This is the process whereby the values of fracture transmissivities in a model are adjusted to obtain a good fit of the calculated pressures to the measured pressure values. While there are existing methods to condition transmissivity fields on transmissivity, pressure and flow measurements for a continuous porous medium, there is little literature on conditioning fracture networks. Conditioning fracture transmissivities on pressure or flow values is a complex problem because the measurements are not linearly related to the fracture transmissivities and they are also dependent on all the fracture transmissivities in the network. We present a new method for conditioning fracture transmissivities on measured pressure values based on the calculation of certain basis vectors; each basis vector represents the change to the log transmissivity of the fractures in the network that results in a unit increase in the pressure at one measurement point whilst keeping the pressure at the remaining measurement points constant. The fracture transmissivities are updated by adding a linear combination of the basis vectors, with coefficients obtained by minimizing an error function. A mathematical summary of the method is given. The algorithm is implemented in the existing finite element code ConnectFlow, developed and marketed by Serco Technical Services, which models groundwater flow in a fracture network. Results of the conditioning are shown for a number of simple test problems as well as for a realistic large-scale test case.
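In symbols, and using generic notation not taken from the paper, the update described above can be written as

log T_new = log T_0 + Σ_k c_k b_k,
c = argmin_c Σ_j ( p_j(log T_0 + Σ_k c_k b_k) − p_j^meas )²,

where T is the vector of fracture transmissivities, p_j(·) the pressure computed by the flow model at measurement point j, p_j^meas the measured pressure there, and b_k the basis vector constructed so that a unit step along it raises the computed pressure at point k by one unit while, to first order, leaving the other measurement points unchanged. Because the pressures depend nonlinearly on the transmissivities, coefficients found from such a minimization are only first-order corrections, so in practice an update of this kind would typically be recomputed and reapplied until the calculated and measured pressures agree.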