19 results for DIGITAL ELEVATION MODELS

in DRUM (Digital Repository at the University of Maryland)


Relevance:

30.00%

Publisher:

Abstract:

Using scientific methods in the humanities is at the forefront of objective literary analysis. However, processing big data is particularly complex when the subject matter is qualitative rather than numerical. Large volumes of text require specialized tools to produce quantifiable data from ideas and sentiments. Our team researched the extent to which tools such as Weka and MALLET can test hypotheses about qualitative information. We examined the claim that literary commentary exists within political environments and used US periodical articles concerning Russian literature in the early twentieth century as a case study. These tools generated useful quantitative data that allowed us to run stepwise binary logistic regressions. These statistical tests allowed for time series experiments using sea change and emergency models of history, as well as classification experiments with regard to author characteristics, social issues, and sentiment expressed. Both types of experiments supported our claim to varying degrees but, more importantly, served as a definitive demonstration that digitally enhanced quantitative forms of analysis can apply to qualitative data. Our findings set the foundation for further experiments in the emerging field of digital humanities.
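As an illustration of the statistical step described above, here is a minimal sketch of a binary logistic regression fit by plain gradient descent. The word-count features, labels, and learning rate are invented stand-ins for the Weka/MALLET-derived variables, not the study's actual data or stepwise procedure.

```python
import numpy as np

# Toy features for four hypothetical periodical excerpts: column 0 counts
# politically charged terms, column 1 counts purely aesthetic terms. Labels
# mark whether a passage frames Russian literature politically (1) or not (0).
X = np.array([[3.0, 1.0], [4.0, 0.0], [0.0, 2.0], [1.0, 3.0]])
y = np.array([1.0, 1.0, 0.0, 0.0])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Plain gradient descent on the logistic loss (a stand-in for the stepwise fit)
w = np.zeros(X.shape[1])
b = 0.0
for _ in range(2000):
    p = sigmoid(X @ w + b)          # predicted probability of "political" label
    w -= 0.5 * X.T @ (p - y) / len(y)
    b -= 0.5 * np.mean(p - y)

preds = (sigmoid(X @ w + b) > 0.5).astype(float)
```

On separable toy data like this, the fitted model classifies the training passages correctly; the study's actual regressions of course worked on far richer feature sets.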

Relevance:

30.00%

Publisher:

Abstract:

In Fall 2015, the Engineering and Physical Science Library (EPSL) began lending anatomical models as part of its course reserves program. EPSL received a partial skeleton and two muscle model figures from instructors of BSCI105. These models circulate for 4 hours at a time and are generally used by small, collaborative groups of students in the library. This poster will look at the challenges and rewards of adding these items to EPSL’s course reserves.

Relevance:

30.00%

Publisher:

Abstract:

There are hundreds of millions of songs available to the public, necessitating the use of music recommendation systems to discover new music. Currently, such systems account for only the quantitative musical elements of songs, failing to consider aspects of human perception of music and alienating the listener’s individual preferences from recommendations. Our research investigated the relationships between perceptual elements of music, represented by the MUSIC model, and computational musical features generated through The Echo Nest, to determine how a psychological representation of music preference can be incorporated into recommendation systems to embody an individual’s music preferences. Our resultant model facilitates computation of MUSIC factors using The Echo Nest features, and can potentially be integrated into recommendation systems for improved performance.
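The core mapping described above can be sketched as a least-squares regression from audio features to a perceptual factor. The three feature columns and the "Mellow"-style blend weights below are invented for illustration; the actual model was fit to Echo Nest features and listener preference data.

```python
import numpy as np

# Hypothetical scaled audio features for 50 songs (e.g. tempo, energy,
# acousticness) and a perceptual factor assumed, for illustration, to be a
# fixed linear blend of them.
rng = np.random.default_rng(0)
features = rng.random((50, 3))
true_w = np.array([-0.4, -0.8, 0.9])   # hypothetical blend defining the factor
mellow = features @ true_w             # factor scores for the 50 songs

# Ordinary least squares recovers the feature-to-factor mapping
w_hat, *_ = np.linalg.lstsq(features, mellow, rcond=None)
```

A recommender could then score unseen songs on such a factor by applying `w_hat` to their features, which is the sense in which a psychological representation can be folded into an existing feature-based system.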

Relevance:

30.00%

Publisher:

Abstract:

Duchenne muscular dystrophy (DMD) is a neuromuscular disease caused by mutations in the dystrophin gene. DMD is clinically characterized by severe, progressive and irreversible loss of muscle function, in which most patients lose the ability to walk by their early teens and die by their early 20s. Impaired intracellular calcium (Ca2+) regulation and activation of cell degradation pathways have been proposed as key contributors to DMD disease progression. This dissertation research consists of three studies investigating the role of intracellular Ca2+ in skeletal muscle dysfunction in different mouse models of DMD. Study one evaluated the role of Ca2+-activated enzymes (proteases) that activate protein degradation in excitation-contraction (E-C) coupling failure following repeated contractions in mdx and dystrophin-utrophin null (mdx/utr-/-) mice. Single muscle fibers from mdx/utr-/- mice had greater E-C coupling failure following repeated contractions compared to fibers from mdx mice. Moreover, protease inhibition during these contractions was sufficient to attenuate E-C coupling failure in muscle fibers from both mdx and mdx/utr-/- mice. Study two evaluated the effects of overexpressing the Ca2+ buffering protein sarcoplasmic/endoplasmic reticulum Ca2+-ATPase 1 (SERCA1) in skeletal muscles from mdx and mdx/utr-/- mice. Overall, SERCA1 overexpression decreased muscle damage and protected the muscle from contraction-induced injury in mdx and mdx/utr-/- mice. In study three, the cellular mechanisms underlying the beneficial effects of SERCA1 overexpression in mdx and mdx/utr-/- mice were investigated. SERCA1 overexpression attenuated calpain activation in mdx muscle only, while partially attenuating the degradation of the calpain target desmin in mdx/utr-/- mice. 
Additionally, SERCA1 overexpression decreased the SERCA-inhibitory protein sarcolipin in mdx muscle but did not alter levels of Ca2+ regulatory proteins (parvalbumin and calsequestrin) in either dystrophic model. Lastly, SERCA1 overexpression blunted the increase in endoplasmic reticulum stress markers Grp78/BiP in mdx mice and C/EBP homologous protein (CHOP) in mdx and mdx/utr-/- mice. Overall, findings from the studies presented in this dissertation provide new insight into the role of Ca2+ in muscle dysfunction and damage in different dystrophic mouse models. Further, these findings support the overall strategy for improving intracellular Ca2+ control for the development of novel therapies for DMD.

Relevance:

30.00%

Publisher:

Abstract:

Maps depicting spatial pattern in the stability of summer greenness could advance understanding of how forest ecosystems will respond to global changes such as a longer growing season. Declining summer greenness, or “greendown”, is spectrally related to declining near-infrared reflectance and is observed in most remote sensing time series to begin shortly after peak greenness at the end of spring and extend until the beginning of leaf coloration in autumn. Understanding spatial patterns in the strength of greendown has recently become possible with the advancement of Landsat phenology products, which show that greendown patterns vary at scales appropriate for linking these patterns to proposed environmental forcing factors. This study tested two non-mutually exclusive hypotheses for how leaf measurements and environmental factors correlate with greendown and decreasing NIR reflectance across sites. At the landscape scale, we used linear regression to test the effects of maximum greenness, elevation, slope, aspect, solar irradiance and canopy rugosity on greendown. Secondly, we used leaf chemical traits and reflectance observations to test the effect of nitrogen availability and intrinsic water use efficiency on leaf-level greendown, and landscape-level greendown measured from Landsat. The study was conducted using Quercus alba canopies across 21 sites of an eastern deciduous forest in North America between June and August 2014. Our linear model explained greendown variance with an R2=0.47 with maximum greenness as the greatest model effect. Subsequent models excluding one model effect revealed elevation and aspect were the two topographic factors that explained the greatest amount of greendown variance. Regression results also demonstrated important interactions between all three variables, with the greatest interaction showing that aspect had greater influence on greendown at sites with steeper slopes. 
Leaf-level reflectance was correlated with foliar δ13C (proxy for intrinsic water use efficiency), but foliar δ13C did not translate into correlations with landscape-level variation in greendown from Landsat. Therefore, we conclude that Landsat greendown is primarily indicative of landscape position, with a small effect of canopy structure, and no measurable effect of leaf reflectance. With this understanding of Landsat greendown, we can better explain the effects of landscape factors on vegetation reflectance and perhaps on phenology, which would be very useful for studying phenology in the context of global climate change.
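The landscape-scale regression described above can be sketched as ordinary least squares with a slope × aspect interaction term. The simulated data and coefficients below are invented to mirror the reported pattern (aspect matters more on steeper slopes); they are not the study's measurements.

```python
import numpy as np

# Simulated site-level predictors (units are plausible but hypothetical)
rng = np.random.default_rng(1)
n = 200
max_green = rng.random(n)              # peak greenness
elevation = rng.random(n) * 500.0      # m
slope = rng.random(n) * 30.0           # degrees
aspect = rng.random(n)                 # transformed aspect index (hypothetical)

# Invented response with a slope x aspect interaction, echoing the finding
# that aspect has greater influence on greendown at steeper sites
greendown = (0.5 * max_green - 0.001 * elevation
             + 0.02 * slope + 0.1 * aspect + 0.03 * slope * aspect)

# Design matrix: intercept, main effects, and the interaction term
X = np.column_stack([np.ones(n), max_green, elevation, slope, aspect,
                     slope * aspect])
beta, *_ = np.linalg.lstsq(X, greendown, rcond=None)
r2 = 1.0 - np.sum((greendown - X @ beta) ** 2) / np.sum(
    (greendown - greendown.mean()) ** 2)
```

With noiseless simulated data the coefficients are recovered exactly; the study's real model of course left residual variance (R2 = 0.47).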

Relevance:

30.00%

Publisher:

Abstract:

In the past decade, systems that extract information from millions of Internet documents have become commonplace. Knowledge graphs -- structured knowledge bases that describe entities, their attributes and the relationships between them -- are a powerful tool for understanding and organizing this vast amount of information. However, a significant obstacle to knowledge graph construction is the unreliability of the extracted information, due to noise and ambiguity in the underlying data, errors made by the extraction system, and the complexity of reasoning about the dependencies between these noisy extractions. My dissertation addresses these challenges by exploiting the interdependencies between facts to improve the quality of the knowledge graph in a scalable framework. I introduce a new approach called knowledge graph identification (KGI), which resolves the entities, attributes and relationships in the knowledge graph by incorporating uncertain extractions from multiple sources, entity co-references, and ontological constraints. I define a probability distribution over possible knowledge graphs and infer the most probable knowledge graph using a combination of probabilistic and logical reasoning. Such probabilistic models are frequently dismissed due to scalability concerns, but my implementation of KGI maintains tractable performance on large problems through the use of hinge-loss Markov random fields, which have a convex inference objective. This allows the inference of large knowledge graphs with 4M facts and 20M ground constraints in 2 hours. To further scale the solution, I develop a distributed approach to the KGI problem which runs in parallel across multiple machines, reducing inference time by 90%. Finally, I extend my model to the streaming setting, where a knowledge graph is continuously updated by incorporating newly extracted facts. 
I devise a general approach for approximately updating inference in convex probabilistic models, and quantify the approximation error by defining and bounding inference regret for online models. Together, my work retains the attractive features of probabilistic models while providing the scalability necessary for large-scale knowledge graph construction. These models have been applied on a number of real-world knowledge graph projects, including the NELL project at Carnegie Mellon and the Google Knowledge Graph.
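A tiny sketch in the spirit of the hinge-loss Markov random fields mentioned above: soft truth values in [0, 1] for two candidate facts, with squared hinge potentials for a noisy extraction and an ontological rule. The weights, rules, and confidence values are invented for illustration, and the solver is plain projected gradient descent rather than the dissertation's inference engine.

```python
# Two candidate facts with soft truth values a, b in [0, 1]. A noisy extractor
# asserts fact A with confidence 0.9 (weight w1); an ontological rule
# "A implies B" carries weight w2. The energy is a sum of squared hinges,
# so the inference objective is convex.
w1, w2 = 1.0, 2.0
a, b = 0.0, 0.0                # initial truth values
lr = 0.05
for _ in range(5000):          # projected gradient descent on the energy
    h1 = max(0.0, 0.9 - a)     # extraction hinge: zero once a >= 0.9
    h2 = max(0.0, a - b)       # rule hinge: zero once b >= a
    grad_a = -2 * w1 * h1 + 2 * w2 * h2
    grad_b = -2 * w2 * h2
    a = min(1.0, max(0.0, a - lr * grad_a))
    b = min(1.0, max(0.0, b - lr * grad_b))
```

Convexity is what makes this scale: the same kind of objective, with millions of ground hinges instead of two, is what the KGI implementation optimizes.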

Relevance:

30.00%

Publisher:

Abstract:

Leafy greens are an essential part of a healthy diet. Because of their health benefits, production and consumption of leafy greens have increased considerably in the U.S. in the last few decades. However, leafy greens have also been associated with a large number of foodborne disease outbreaks in the last few years. The overall goal of this dissertation was to use the current knowledge of predictive models and available data to understand the growth, survival, and death of enteric pathogens in leafy greens at pre- and post-harvest levels. Temperature plays a major role in the growth and death of bacteria in foods. A growth-death model was developed for Salmonella and Listeria monocytogenes in leafy greens for the varying temperature conditions typically encountered during the supply chain. The developed growth-death models were validated using experimental dynamic time-temperature profiles available in the literature. Furthermore, these growth-death models for Salmonella and Listeria monocytogenes and a similar model for E. coli O157:H7 were used to predict the growth of these pathogens in leafy greens during transportation without temperature control. Refrigeration of leafy greens serves the purposes of increasing their shelf life and mitigating bacterial growth, but at the same time, storage of foods at lower temperatures increases the storage cost. Nonlinear programming was used to optimize the storage temperature of leafy greens during the supply chain while minimizing the storage cost and maintaining the desired levels of sensory quality and microbial safety. Most of the outbreaks associated with consumption of leafy greens contaminated with E. coli O157:H7 have occurred during July-November in the U.S. A dynamic system model consisting of subsystems and inputs (soil, irrigation, cattle, wildlife, and rainfall) simulating a farm in a major leafy-greens-producing area in California was developed. 
The model was simulated incorporating the events of planting, irrigation, harvesting, ground preparation for the new crop, contamination of soil and plants, and survival of E. coli O157:H7. The predictions of this system model are in agreement with the seasonality of outbreaks. In summary, this dissertation developed and applied growth, survival, and death models of enteric pathogens in leafy greens during production and the supply chain.
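The temperature-to-growth coupling at the heart of such models can be sketched with a Ratkowsky square-root secondary model integrated over a time-temperature profile. The parameters b and t_min below are illustrative, not the dissertation's fitted values for Salmonella or Listeria in leafy greens.

```python
# Ratkowsky square-root secondary model: growth rate rises with the square of
# the temperature excess above a minimum growth temperature.
b, t_min = 0.023, 2.0          # 1/(h^0.5 * deg C), deg C (illustrative)

def growth_rate(temp_c):
    """Specific growth rate (log10 CFU per hour) at a given temperature."""
    if temp_c <= t_min:
        return 0.0
    return (b * (temp_c - t_min)) ** 2

# Dynamic profile: 4 h chilled at 4 C, then a 6 h cold-chain break at 25 C
profile = [(4.0, 4.0), (25.0, 6.0)]    # (temperature deg C, duration h)
log_count = 2.0                        # initial contamination, log10 CFU/g
for temp, hours in profile:
    log_count += growth_rate(temp) * hours
```

A full growth-death model adds lag, stationary-phase, and inactivation terms; this only shows how a secondary model turns a time-temperature profile into predicted log growth, which is also the quantity a storage-temperature optimization would constrain.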

Relevance:

30.00%

Publisher:

Abstract:

The predictive capabilities of computational fire models have improved in recent years such that models have become an integral part of many research efforts. Models improve the understanding of the fire risk of materials and may decrease the number of expensive experiments required to assess the fire hazard of a specific material or designed space. A critical component of a predictive fire model is the pyrolysis sub-model that provides a mathematical representation of the rate of gaseous fuel production from condensed phase fuels given a heat flux incident to the material surface. The modern, comprehensive pyrolysis sub-models that are common today require the definition of many model parameters to accurately represent the physical description of materials that are ubiquitous in the built environment. Coupled with the increase in the number of parameters required to accurately represent the pyrolysis of materials is the increasing prevalence in the built environment of engineered composite materials that have never been measured or modeled. The motivation behind this project is to develop a systematic, generalized methodology to determine the requisite parameters to generate pyrolysis models with predictive capabilities for layered composite materials that are common in industrial and commercial applications. This methodology has been applied to four common composites in this work that exhibit a range of material structures and component materials. The methodology utilizes a multi-scale experimental approach in which each test is designed to isolate and determine a specific subset of the parameters required to define a material in the model. Data collected in simultaneous thermogravimetry and differential scanning calorimetry experiments were analyzed to determine the reaction kinetics, thermodynamic properties, and energetics of decomposition for each component of the composite. 
Data collected in microscale combustion calorimetry experiments were analyzed to determine the heats of complete combustion of the volatiles produced in each reaction. Inverse analyses were conducted on sample temperature data collected in bench-scale tests to determine the thermal transport parameters of each component through degradation. Simulations of quasi-one-dimensional bench-scale gasification tests generated from the resultant models using the ThermaKin modeling environment were compared to experimental data to independently validate the models.
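The reaction-kinetics piece of the methodology can be illustrated with a first-order Arrhenius decomposition integrated over a constant-heating-rate (TGA-style) temperature program. The pre-exponential factor and activation energy below are illustrative round numbers, not measured parameters for any of the four composites.

```python
import math

# First-order Arrhenius decomposition: d(alpha)/dt = A*exp(-E/RT)*(1 - alpha),
# where alpha is the extent of conversion (0 = intact, 1 = fully decomposed).
A = 1.0e12      # pre-exponential factor, 1/s (illustrative)
E = 150_000.0   # activation energy, J/mol (illustrative)
R = 8.314       # gas constant, J/(mol K)

def rate(alpha, temp_k):
    """Conversion rate for a first-order reaction at temperature temp_k."""
    return A * math.exp(-E / (R * temp_k)) * (1.0 - alpha)

# Simulate a constant-heating-rate run: 10 K/min from 300 K to 900 K
alpha, temp = 0.0, 300.0
dt, beta = 0.1, 10.0 / 60.0    # time step (s), heating rate (K/s)
t_half = None
while temp < 900.0:
    alpha = min(1.0, alpha + dt * rate(alpha, temp))
    if t_half is None and alpha >= 0.5:
        t_half = temp          # temperature at 50% conversion
    temp += beta * dt
```

Fitting works in the opposite direction: measured TGA mass-loss curves constrain A and E for each reaction of each component, which is the analysis step the thermogravimetry experiments support.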

Relevance:

30.00%

Publisher:

Abstract:

Although tyrosine kinase inhibitors (TKIs) such as imatinib have transformed chronic myelogenous leukemia (CML) into a chronic condition, these therapies are not curative in the majority of cases. Most patients must continue TKI therapy indefinitely, a requirement that is both expensive and compromises a patient's quality of life. While TKIs are known to reduce leukemic cells' proliferative capacity and to induce apoptosis, their effects on leukemic stem cells, the immune system, and the microenvironment are not fully understood. A more complete understanding of their global therapeutic effects would help us to identify any limitations of TKI monotherapy and to address these issues through novel combination therapies. Mathematical models are a complementary tool to experimental and clinical data that can provide valuable insights into the underlying mechanisms of TKI therapy. Previous modeling efforts have focused on CML patients who show biphasic and triphasic exponential declines in BCR-ABL ratio during therapy. However, our patient data indicates that many patients treated with TKIs show fluctuations in BCR-ABL ratio yet are able to achieve durable remissions. To investigate these fluctuations, we construct a mathematical model that integrates CML with a patient's autologous immune response to the disease. In our model, we define an immune window, which is an intermediate range of leukemic concentrations that lead to an effective immune response against CML. While small leukemic concentrations provide insufficient stimulus, large leukemic concentrations actively suppress a patient's immune system, thus limiting its ability to respond. Our patient data and modeling results suggest that at diagnosis, a patient's high leukemic concentration is able to suppress their immune system. TKI therapy drives the leukemic population into the immune window, allowing the patient's immune cells to expand and eventually mount an efficient response against the residual CML. 
This response drives the leukemic population below the immune window, causing the immune population to contract and allowing the leukemia to partially recover. The leukemia eventually reenters the immune window, thus stimulating a sequence of weaker immune responses as the two populations approach equilibrium. We hypothesize that a patient's autologous immune response to CML may explain the fluctuations in BCR-ABL ratio that are regularly seen during TKI therapy. These fluctuations may serve as a signature of a patient's individual immune response to CML. By applying our modeling framework to patient data, we are able to construct an immune profile that can then be used to propose patient-specific combination therapies aimed at further reducing a patient's leukemic burden. Our characterization of a patient's anti-leukemia immune response may be especially valuable in the study of drug resistance, treatment cessation, and combination therapy.
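A toy ODE sketch of the leukemia-immune interaction with an "immune window": the immune stimulation term is weak at low leukemic concentration, peaks at an intermediate concentration, and is suppressed at high concentration. All parameters and functional forms are invented for illustration; they are not the dissertation's fitted patient model.

```python
def stimulation(leukemia):
    # Immune window: weak for small loads, peaks near 0.5, suppressed for
    # large loads (a Holling-type saturating/suppressing form)
    return leukemia / (0.5 + leukemia + 2.0 * leukemia ** 2)

# Illustrative rates: logistic leukemic growth, constant TKI kill,
# mass-action immune kill, window-gated immune expansion, immune decay
growth, capacity, tki_kill = 0.5, 2.0, 0.3
immune_kill, immune_gain, immune_decay = 0.5, 0.4, 0.05

L, I = 10.0, 0.1               # leukemic load at diagnosis, small immune pool
dt, history = 0.01, []
for _ in range(60000):         # forward-Euler simulation of therapy
    dL = growth * L * (1 - L / capacity) - tki_kill * L - immune_kill * L * I
    dI = immune_gain * stimulation(L) * I - immune_decay * I
    L = max(L + dt * dL, 1e-12)
    I = max(I + dt * dI, 0.0)
    history.append(L)
```

Under these invented parameters, therapy drives the leukemic load down into the window, the immune population expands and pushes it lower, and the two populations spiral toward a small coexistence equilibrium, qualitatively echoing the damped fluctuations described above.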

Relevance:

30.00%

Publisher:

Abstract:

Theories of sparse signal representation, wherein a signal is decomposed as the sum of a small number of constituent elements, play increasing roles in both mathematical signal processing and neuroscience. This happens despite the differences between signal models in the two domains. After reviewing preliminary material on sparse signal models, I use work on compressed sensing for the electron tomography of biological structures as a target for exploring the efficacy of sparse signal reconstruction in a challenging application domain. My research in this area addresses a topic of keen interest to the biological microscopy community, and has resulted in the development of tomographic reconstruction software which is competitive with the state of the art in its field. Moving from the linear signal domain into the nonlinear dynamics of neural encoding, I explain the sparse coding hypothesis in neuroscience and its relationship with olfaction in locusts. I implement a numerical ODE model of the activity of neural populations responsible for sparse odor coding in locusts as part of a project involving offset spiking in the Kenyon cells. I also explain the validation procedures we have devised to help assess the model's similarity to the biology. The thesis concludes with the development of a new, simplified model of locust olfactory network activity, which seeks with some success to explain statistical properties of the sparse coding processes carried out in the network.
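A minimal ISTA (iterative soft-thresholding) sketch of the sparse reconstruction idea behind the compressed-sensing work described above. The problem sizes, penalty weight, and random data are illustrative; this is the generic l1-regularized least-squares recipe, not the dissertation's tomography pipeline.

```python
import numpy as np

# Synthetic compressed-sensing instance: a k-sparse signal observed through
# an underdetermined random Gaussian measurement matrix.
rng = np.random.default_rng(2)
n, m, k = 100, 40, 4                      # signal length, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.choice([-1.0, 1.0], size=k)
y = A @ x_true                            # noiseless linear measurements

lam = 0.01                                # l1 penalty weight
step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1 / Lipschitz constant of gradient
x = np.zeros(n)
for _ in range(3000):
    z = x - step * (A.T @ (A @ x - y))    # gradient step on 0.5*||Ax - y||^2
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
```

With enough incoherent measurements the sparse signal is recovered from far fewer samples than its length, which is the property electron-tomography reconstruction exploits when tilt angles are limited.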

Relevance:

30.00%

Publisher:

Abstract:

This dissertation proposes statistical methods to formulate, estimate and apply complex transportation models. Two main problems are part of the analyses conducted and presented in this dissertation. The first method solves an econometric problem and is concerned with the joint estimation of models that contain both discrete and continuous decision variables. The use of ordered models along with a regression is proposed and their effectiveness is evaluated with respect to unordered models. Procedures to calculate and optimize the log-likelihood functions of both discrete-continuous approaches are derived, and the difficulties associated with the estimation of unordered models are explained. Numerical approximation methods based on the Genz algorithm are implemented in order to solve the multidimensional integral associated with the unordered modeling structure. The problems deriving from the lack of smoothness of the probit model around the maximum of the log-likelihood function, which makes the optimization and the calculation of standard deviations very difficult, are carefully analyzed. A methodology to perform out-of-sample validation in the context of a joint model is proposed. Comprehensive numerical experiments have been conducted on both simulated and real data. In particular, the discrete-continuous models are estimated and applied to vehicle ownership and use models on data extracted from the 2009 National Household Travel Survey. The second part of this work offers a comprehensive statistical analysis of free-flow speed distribution; the method is applied to data collected on a sample of roads in Italy. A linear mixed model that includes speed quantiles in its predictors is estimated. Results show that there is no road effect in the analysis of free-flow speeds, which is particularly important for model transferability. 
A very general framework to predict random effects with few observations and incomplete access to model covariates is formulated and applied to predict the distribution of free-flow speed quantiles. The speed distribution of most road sections is successfully predicted; jack-knife estimates are calculated and used to explain why some sections are poorly predicted. Ultimately, this work contributes to the literature in transportation modeling by proposing econometric model formulations for discrete-continuous variables, more efficient methods for the calculation of multivariate normal probabilities, and random effects models for free-flow speed estimation that take into account the survey design. All methods are rigorously validated on both real and simulated data.
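The multidimensional integrals mentioned above are multivariate normal rectangle probabilities, and Genz-type numerical integration is available off the shelf: SciPy's multivariate normal cdf uses that family of methods. This sketch assumes SciPy is installed and uses the bivariate case, where a closed-form check exists.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Orthant probability P(X < 0, Y < 0) for a standard bivariate normal with
# correlation rho, computed by SciPy's Genz-style numerical integration
rho = 0.5
cov = np.array([[1.0, rho], [rho, 1.0]])
p = multivariate_normal(mean=[0.0, 0.0], cov=cov).cdf([0.0, 0.0])

# Closed form for the bivariate orthant probability:
# P(X<0, Y<0) = 1/4 + arcsin(rho)/(2*pi), which equals 1/3 for rho = 0.5
exact = 0.25 + np.arcsin(rho) / (2.0 * np.pi)
```

In an unordered probit model the same kind of integral appears in higher dimensions, where no closed form exists and the numerical approximation is the only practical route.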

Relevance:

30.00%

Publisher:

Abstract:

The American woodcock (Scolopax minor) population index in North America has declined 0.9% a year since 1968, prompting managers to identify priority information and management needs for the species (Sauer et al. 2008). Managers identified a need for a population model that better informs them of the status of American woodcock populations (Case et al. 2010). Population reconstruction techniques use long-term age-at-harvest data and harvest effort to estimate abundances with error estimates. Four new models were successfully developed using survey data (1999 to 2013). The optimal model estimates a sex-specific harvest probability for adult females of 0.148 (SE = 0.017) and for all other age-sex cohorts of 0.082 (SE = 0.008) for the most current year, 2013. The model estimated a yearly survival rate of 0.528 (SE = 0.008). Total abundance ranged from 5,206,000 woodcock in 2007 to 6,075,800 woodcock in 1999. This study provides the first abundance estimates for American woodcock populations.
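A back-of-envelope sketch of the expectation at the core of age-at-harvest reconstruction, using the harvest probability and survival rate reported above; the cohort size N is hypothetical. A bird contributes to the age-a harvest if it escapes harvest and survives for a years and is then harvested.

```python
# Harvest probability (non-adult-female cohorts) and annual survival, taken
# from the estimates quoted in the abstract; N is a hypothetical cohort.
h, s = 0.082, 0.528
N = 1_000_000                  # hypothetical cohort of new recruits

# Expected harvest count by age: escape harvest and survive for a years
# (probability ((1-h)*s)**a), then be harvested (probability h)
expected_harvest = [N * ((1 - h) * s) ** a * h for a in range(4)]
```

Reconstruction inverts this relationship: observed age-at-harvest counts and a likelihood built from expectations like these are used to estimate N, h, and s together with their standard errors.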

Relevance:

30.00%

Publisher:

Abstract:

Experiments with ultracold atoms in optical lattices have become a versatile testing ground to study diverse quantum many-body Hamiltonians. A single-band Bose-Hubbard (BH) Hamiltonian was first proposed to describe these systems in 1998, and its associated quantum phase transition was subsequently observed in 2002. Over the years, there has been rapid progress in experimental realizations of more complex lattice geometries, leading to more exotic BH Hamiltonians with contributions from excited bands and modified tunneling and interaction energies. There have also been interesting theoretical insights and experimental studies on “unconventional” Bose-Einstein condensates in optical lattices and predictions of rich orbital physics in higher bands. In this thesis, I present our results on several multi-band BH models and emergent quantum phenomena. In particular, I study optical lattices with two local minima per unit cell and show that the low-energy states of a multi-band BH Hamiltonian with only pairwise interactions are equivalent to an effective single-band Hamiltonian with strong three-body interactions. I also propose a second method to create three-body interactions in ultracold gases of bosonic atoms in an optical lattice. In this case, this is achieved by a careful cancellation of two contributions in the pairwise interaction between the atoms, one proportional to the zero-energy scattering length and a second proportional to the effective range. I subsequently study the physics of Bose-Einstein condensation in the second band of a double-well 2D lattice and show that the collision-aided decay rate of the condensate to the ground band is smaller than the tunneling rate between neighboring unit cells. Finally, I propose a numerical method using the discrete variable representation for constructing real-valued Wannier functions localized in a unit cell for optical lattices. 
The developed numerical method is general and can be applied to a wide array of optical lattice geometries in one, two or three dimensions.
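The simplest instance of the models discussed above is a single-band, two-site, two-boson Bose-Hubbard Hamiltonian, which can be written down and diagonalized directly. This is a textbook toy sketch, not one of the multi-band Hamiltonians derived in the thesis.

```python
import numpy as np

# Two-site, two-boson Bose-Hubbard Hamiltonian in the occupation basis
# {|2,0>, |1,1>, |0,2>}: J is the tunneling energy, U the on-site interaction.
def bose_hubbard_2site(J, U):
    t = -np.sqrt(2.0) * J                 # <1,1|H|2,0> hopping matrix element
    return np.array([[U,   t,   0.0],
                     [t,   0.0, t  ],
                     [0.0, t,   U  ]])

# Non-interacting limit: the spectrum is -2J, 0, +2J
evals_free = np.linalg.eigvalsh(bose_hubbard_2site(J=1.0, U=0.0))
# Strong-interaction (Mott-like) limit: the ground-state energy approaches
# the second-order perturbative result -4*J**2/U
e_mott = np.linalg.eigvalsh(bose_hubbard_2site(J=1.0, U=50.0))[0]
```

The two limits bracket the superfluid-to-Mott crossover that the full lattice models describe; multi-band and three-body extensions enlarge the basis and modify the matrix elements, but the construction is the same.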

Relevance:

30.00%

Publisher:

Abstract:

Secure computation involves multiple parties computing a common function while keeping their inputs private, and is a growing field of cryptography due to its potential for maintaining privacy guarantees in real-world applications. However, current secure computation protocols are not yet efficient enough to be used in practice. We argue that this is due to much of the research effort being focused on generality rather than specificity. Namely, current research tends to focus on constructing and improving protocols for the strongest notions of security or for an arbitrary number of parties. However, in real-world deployments, these security notions are often too strong, or the number of parties running a protocol would be smaller. In this thesis we make several steps towards bridging the efficiency gap of secure computation by focusing on constructing efficient protocols for specific real-world settings and security models. In particular, we make the following four contributions:

- We show an efficient (when amortized over multiple runs) maliciously secure two-party secure computation (2PC) protocol in the multiple-execution setting, where the same function is computed multiple times by the same pair of parties.
- We improve the efficiency of 2PC protocols in the publicly verifiable covert security model, where a party can cheat with some probability, but if it gets caught then the honest party obtains a certificate proving that the given party cheated.
- We show how to optimize existing 2PC protocols when the function to be computed includes predicate checks on its inputs.
- We demonstrate an efficient maliciously secure protocol in the three-party setting.
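To illustrate the setting only, here is a toy three-party additive secret-sharing computation of a sum: each party splits its private input into random shares, so the total is computed without any single party learning another's input. This shows the semi-honest baseline idea; maliciously secure protocols like those in the thesis add authentication and consistency checks on top.

```python
import secrets

# Additive secret sharing over Z_p with a Mersenne prime modulus
P = 2 ** 61 - 1

def share(value, n=3):
    """Split value into n additive shares modulo P."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

inputs = [12, 30, 7]                      # each party's private value
all_shares = [share(v) for v in inputs]
# Party i locally adds the i-th share of every input...
partials = [sum(s[i] for s in all_shares) % P for i in range(3)]
# ...and only the combined total is revealed
total = sum(partials) % P
```

Any two shares of a value are uniformly random, so no coalition smaller than all three parties learns an input; only the sum is opened.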

Relevance:

30.00%

Publisher:

Abstract:

A computer vision system that interacts in natural language needs to understand the visual appearance of interactions between objects along with the appearance of the objects themselves. Relationships between objects are frequently mentioned in queries for tasks like semantic image retrieval, image captioning, visual question answering and natural language object detection. Hence, it is essential to model context between objects when solving these tasks. In the first part of this thesis, we present a technique for detecting an object mentioned in a natural language query. Specifically, we work with referring expressions, which are sentences that identify a particular object instance in an image. In many referring expressions, an object is described in relation to another object using prepositions, comparative adjectives, action verbs, etc. Our proposed technique can identify both the referred object and the context object mentioned in such expressions. Context is also useful for incrementally understanding scenes and videos. In the second part of this thesis, we propose techniques for searching for objects in an image and events in a video. Our proposed incremental algorithms use the context from previously explored regions to prioritize the regions to explore next. The advantage of incremental understanding is restricting the amount of computation time and/or resources spent on various detection tasks. Our first proposed technique shows how to learn context in indoor scenes in an implicit manner and use it for searching for objects. The second technique shows how explicitly written context rules of one-on-one basketball can be used to sequentially detect events in a game.