17 results for University Models

in DRUM (Digital Repository at the University of Maryland)


Relevance: 30.00%

Abstract:

This study sought to understand the phenomenon of faculty involvement in indirect cost under-recovery. The focus of the study was on public research university STEM (science, technology, engineering and mathematics) faculty, and their perspectives on, and behavior toward, a higher education fiscal policy. The explanatory scheme was derived from anthropological theory, and incorporated organizational culture, faculty socialization, and political bargaining models in the conceptual framework. This study drew on two key assumptions. The first assumption was that faculty understanding of, and behavior toward, indirect cost recovery represents values, beliefs, and choices drawn from the distinct professional socialization and distinct culture of faculty. The second assumption was that when faculty and institutional administrators are in conflict over indirect cost recovery, the resultant formal administrative decision comes about through political bargaining over critical resources. The research design was a single-site, qualitative case study focused on learning the meaning of the phenomenon as understood by the informants. The informants were tenured and tenure-track research university faculty in the STEM fields who were highly successful at obtaining Federal sponsored research funds, with individual sponsored research portfolios of at least one million dollars. The data consisted of 11 informant interviews, bolstered by documentary evidence. The findings indicated that faculty socialization and organizational culture were the most dominant themes, while political bargaining emerged as significantly less prominent. Public research university STEM faculty are most concerned about the survival of their research programs and the discovery facilitated by those programs. They resort to conjecture when confronted with the issue of indirect cost recovery. The findings direct institutional administrators to place less emphasis on compliance and hierarchy when working with expert professionals such as science faculty. Instead, a more effective focus might be on communication and clarity in budget processes and organizational decision-making, and a concentration on critical administrative support that can relieve faculty administrative burdens. For higher education researchers, the findings suggest that we need more sophisticated models to understand organizations dependent on expert professionals.

Relevance: 30.00%

Abstract:

In Fall 2015, the Engineering and Physical Science Library (EPSL) began lending anatomical models as part of its course reserves program. EPSL received a partial skeleton and two muscle model figures from instructors of BSCI105. These models circulate for 4 hours at a time and are generally used by small, collaborative groups of students in the library. This poster looks at the challenges and rewards of adding these items to EPSL's course reserves.

Relevance: 30.00%

Abstract:

There are hundreds of millions of songs available to the public, necessitating the use of music recommendation systems to discover new music. Currently, such systems account only for the quantitative musical elements of songs, failing to consider human perception of music and divorcing recommendations from the listener's individual preferences. Our research investigated the relationships between perceptual elements of music, represented by the MUSIC model, and computational musical features generated through The Echo Nest, to determine how a psychological representation of music preference can be incorporated into recommendation systems to embody an individual's music preferences. Our resultant model facilitates computation of MUSIC factors from The Echo Nest features, and can potentially be integrated into recommendation systems for improved performance.
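
To make the mapping concrete, here is a minimal sketch of the kind of model described: a learned linear map from computational audio features to the five MUSIC factors (Mellow, Unpretentious, Sophisticated, Intense, Contemporary). The feature columns, training values, and use of scikit-learn are illustrative assumptions, not the study's actual data or pipeline.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Rows are songs; columns are Echo Nest-style features (hypothetical):
# tempo, energy, valence, acousticness, danceability.
X = np.array([
    [120.0, 0.80, 0.60, 0.10, 0.70],
    [ 72.0, 0.30, 0.40, 0.90, 0.30],
    [140.0, 0.90, 0.80, 0.05, 0.90],
])

# Corresponding (hypothetical) MUSIC-factor scores for the same songs.
Y = np.array([
    [0.2, 0.5, 0.3, 0.7, 0.8],
    [0.9, 0.4, 0.7, 0.1, 0.2],
    [0.1, 0.3, 0.2, 0.9, 0.9],
])

model = LinearRegression().fit(X, Y)   # one linear map, five outputs

# Predict a new song's MUSIC profile; a recommender could match this
# profile against a listener's preference profile.
print(model.predict([[128.0, 0.70, 0.50, 0.20, 0.80]]))
```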

Relevance: 30.00%

Abstract:

Duchenne muscular dystrophy (DMD) is a neuromuscular disease caused by mutations in the dystrophin gene. DMD is clinically characterized by severe, progressive and irreversible loss of muscle function, in which most patients lose the ability to walk by their early teens and die by their early 20s. Impaired intracellular calcium (Ca2+) regulation and activation of cell degradation pathways have been proposed as key contributors to DMD disease progression. This dissertation research consists of three studies investigating the role of intracellular Ca2+ in skeletal muscle dysfunction in different mouse models of DMD. Study one evaluated the role of Ca2+-activated enzymes (proteases) that activate protein degradation in excitation-contraction (E-C) coupling failure following repeated contractions in mdx and dystrophin-utrophin null (mdx/utr-/-) mice. Single muscle fibers from mdx/utr-/- mice had greater E-C coupling failure following repeated contractions compared to fibers from mdx mice. Moreover, protease inhibition during these contractions was sufficient to attenuate E-C coupling failure in muscle fibers from both mdx and mdx/utr-/- mice. Study two evaluated the effects of overexpressing the Ca2+ buffering protein sarcoplasmic/endoplasmic reticulum Ca2+-ATPase 1 (SERCA1) in skeletal muscles from mdx and mdx/utr-/- mice. Overall, SERCA1 overexpression decreased muscle damage and protected the muscle from contraction-induced injury in mdx and mdx/utr-/- mice. In study three, the cellular mechanisms underlying the beneficial effects of SERCA1 overexpression in mdx and mdx/utr-/- mice were investigated. SERCA1 overexpression attenuated calpain activation in mdx muscle only, while partially attenuating the degradation of the calpain target desmin in mdx/utr-/- mice. Additionally, SERCA1 overexpression decreased the SERCA-inhibitory protein sarcolipin in mdx muscle but did not alter levels of Ca2+ regulatory proteins (parvalbumin and calsequestrin) in either dystrophic model. Lastly, SERCA1 overexpression blunted the increase in the endoplasmic reticulum stress markers Grp78/BiP in mdx mice and C/EBP homologous protein (CHOP) in mdx and mdx/utr-/- mice. Overall, findings from the studies presented in this dissertation provide new insight into the role of Ca2+ in muscle dysfunction and damage in different dystrophic mouse models. Further, these findings support the overall strategy of improving intracellular Ca2+ control in the development of novel therapies for DMD.

Relevance: 30.00%

Abstract:

In the past decade, systems that extract information from millions of Internet documents have become commonplace. Knowledge graphs -- structured knowledge bases that describe entities, their attributes and the relationships between them -- are a powerful tool for understanding and organizing this vast amount of information. However, a significant obstacle to knowledge graph construction is the unreliability of the extracted information, due to noise and ambiguity in the underlying data, errors made by the extraction system, and the complexity of reasoning about the dependencies between these noisy extractions. My dissertation addresses these challenges by exploiting the interdependencies between facts to improve the quality of the knowledge graph in a scalable framework. I introduce a new approach called knowledge graph identification (KGI), which resolves the entities, attributes and relationships in the knowledge graph by incorporating uncertain extractions from multiple sources, entity co-references, and ontological constraints. I define a probability distribution over possible knowledge graphs and infer the most probable knowledge graph using a combination of probabilistic and logical reasoning. Such probabilistic models are frequently dismissed due to scalability concerns, but my implementation of KGI maintains tractable performance on large problems through the use of hinge-loss Markov random fields, which have a convex inference objective. This allows inference over knowledge graphs with 4M facts and 20M ground constraints in 2 hours. To further scale the solution, I develop a distributed approach to the KGI problem which runs in parallel across multiple machines, reducing inference time by 90%. Finally, I extend my model to the streaming setting, where a knowledge graph is continuously updated by incorporating newly extracted facts. I devise a general approach for approximately updating inference in convex probabilistic models, and quantify the approximation error by defining and bounding inference regret for online models. Together, my work retains the attractive features of probabilistic models while providing the scalability necessary for large-scale knowledge graph construction. These models have been applied to a number of real-world knowledge graph projects, including the NELL project at Carnegie Mellon and the Google Knowledge Graph.
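
As a concrete illustration of the inference machinery named above, the following is a minimal sketch of MAP inference in a hinge-loss Markov random field: soft truth values in [0, 1], weighted squared-hinge potentials over linear functions of those values, and a convex objective minimized directly. The rules, weights, and variables are hypothetical, not those of the KGI system.

```python
import numpy as np
from scipy.optimize import minimize

# x = [ truth of worksAt(A, B), truth of organization(B) ]
# Potential 1: "worksAt(A, B) implies organization(B)" -> penalize x0 > x1.
# Potential 2: an extractor asserts worksAt(A, B) with confidence 0.9
#              -> penalize x0 falling below 0.9.
A = np.array([[ 1.0, -1.0],
              [-1.0,  0.0]])
b = np.array([0.0, 0.9])
w = np.array([5.0, 2.0])

def objective(x):
    hinges = np.maximum(0.0, A @ x + b)
    return np.sum(w * hinges ** 2)   # convex, hence tractable at scale

res = minimize(objective, x0=np.array([0.5, 0.5]), bounds=[(0.0, 1.0)] * 2)
print(np.round(res.x, 3))            # most probable soft truth assignment
```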

Relevance: 30.00%

Abstract:

Leafy greens are an essential part of a healthy diet. Because of their health benefits, production and consumption of leafy greens have increased considerably in the U.S. in the last few decades. However, leafy greens have also been associated with a large number of foodborne disease outbreaks in recent years. The overall goal of this dissertation was to use current knowledge of predictive models and available data to understand the growth, survival, and death of enteric pathogens in leafy greens at pre- and post-harvest levels. Temperature plays a major role in the growth and death of bacteria in foods. A growth-death model was developed for Salmonella and Listeria monocytogenes in leafy greens for the varying temperature conditions typically encountered during the supply chain. The developed growth-death models were validated using experimental dynamic time-temperature profiles available in the literature. Furthermore, these growth-death models for Salmonella and Listeria monocytogenes, and a similar model for E. coli O157:H7, were used to predict the growth of these pathogens in leafy greens during transportation without temperature control. Refrigeration of leafy greens serves to increase their shelf life and mitigate bacterial growth, but storage of foods at lower temperatures increases the storage cost. Nonlinear programming was used to optimize the storage temperature of leafy greens during the supply chain, minimizing the storage cost while maintaining the desired levels of sensory quality and microbial safety. Most of the U.S. outbreaks associated with consumption of leafy greens contaminated with E. coli O157:H7 have occurred during July-November. A dynamic system model consisting of subsystems and inputs (soil, irrigation, cattle, wildlife, and rainfall) was developed to simulate a farm in a major leafy-greens-producing area of California. The model was simulated incorporating the events of planting, irrigation, harvesting, ground preparation for the new crop, contamination of soil and plants, and survival of E. coli O157:H7. The predictions of this system model agree with the observed seasonality of outbreaks. Overall, this dissertation applied growth, survival, and death models of enteric pathogens in leafy greens across production and the supply chain.
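
For a sense of how such a growth model responds to a dynamic temperature profile, here is a minimal sketch using the classic Ratkowsky square-root secondary model for the maximum specific growth rate, sqrt(mu_max) = b(T - T_min). The parameter values and the temperature profile are illustrative assumptions, not the dissertation's fitted values.

```python
import numpy as np

B = 0.023      # 1/(sqrt(h) * degC), hypothetical Ratkowsky slope
T_MIN = 5.0    # degC, notional minimum growth temperature

def mu_max(temp_c):
    """Maximum specific growth rate (ln units/h) at temperature temp_c."""
    return (B * max(0.0, temp_c - T_MIN)) ** 2

# Hourly temperatures for a shipment without temperature control.
temps = [8, 12, 18, 22, 22, 15, 10]   # degC

log_n = np.log(1e2)                    # initial count of 100 CFU/g
for t in temps:
    log_n += mu_max(t) * 1.0           # exponential-phase growth for 1 h
print(f"predicted count: {np.exp(log_n):.0f} CFU/g")
```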

Relevance: 30.00%

Abstract:

The predictive capabilities of computational fire models have improved in recent years such that models have become an integral part of many research efforts. Models improve the understanding of the fire risk of materials and may decrease the number of expensive experiments required to assess the fire hazard of a specific material or designed space. A critical component of a predictive fire model is the pyrolysis sub-model, which provides a mathematical representation of the rate of gaseous fuel production from condensed-phase fuels given a heat flux incident to the material surface. The modern, comprehensive pyrolysis sub-models that are common today require the definition of many model parameters to accurately represent the physical behavior of materials that are ubiquitous in the built environment. Coupled with this growth in the number of required parameters is the increasing prevalence in the built environment of engineered composite materials that have never been measured or modeled. The motivation behind this project is to develop a systematic, generalized methodology to determine the requisite parameters to generate pyrolysis models with predictive capabilities for layered composite materials that are common in industrial and commercial applications. In this work, the methodology was applied to four common composites that exhibit a range of material structures and component materials. The methodology utilizes a multi-scale experimental approach in which each test is designed to isolate and determine a specific subset of the parameters required to define a material in the model. Data collected in simultaneous thermogravimetry and differential scanning calorimetry experiments were analyzed to determine the reaction kinetics, thermodynamic properties, and energetics of decomposition for each component of the composite. Data collected in microscale combustion calorimetry experiments were analyzed to determine the heats of complete combustion of the volatiles produced in each reaction. Inverse analyses were conducted on sample temperature data collected in bench-scale tests to determine the thermal transport parameters of each component through degradation. Simulations of quasi-one-dimensional bench-scale gasification tests, generated from the resultant models using the ThermaKin modeling environment, were compared to experimental data to independently validate the models.
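
To illustrate the reaction-kinetics piece of such a sub-model, here is a minimal sketch of a single first-order Arrhenius decomposition reaction, dalpha/dt = A exp(-E/RT)(1 - alpha), integrated through a constant-heating-rate (TGA-style) experiment. The kinetic parameters are assumed for illustration, not taken from the dissertation's fits.

```python
import numpy as np

R_GAS = 8.314      # J/(mol K)
A_PRE = 1.0e13     # 1/s, pre-exponential factor (assumed)
E_ACT = 180e3      # J/mol, activation energy (assumed)
BETA = 10.0 / 60   # heating rate: 10 K/min expressed in K/s

alpha, T, dt = 0.0, 300.0, 0.1     # conversion, temperature (K), step (s)
history = []
while alpha < 0.999 and T < 1200.0:
    rate = A_PRE * np.exp(-E_ACT / (R_GAS * T)) * (1.0 - alpha)
    alpha += rate * dt
    T += BETA * dt
    history.append((T, rate))

# Temperature of peak mass-loss rate, comparable to a TGA curve's peak.
t_peak = max(history, key=lambda p: p[1])[0]
print(f"peak decomposition rate near {t_peak:.0f} K")
```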

Relevance: 30.00%

Abstract:

Although tyrosine kinase inhibitors (TKIs) such as imatinib have transformed chronic myelogenous leukemia (CML) into a chronic condition, these therapies are not curative in the majority of cases. Most patients must continue TKI therapy indefinitely, a requirement that is expensive and compromises a patient's quality of life. While TKIs are known to reduce leukemic cells' proliferative capacity and to induce apoptosis, their effects on leukemic stem cells, the immune system, and the microenvironment are not fully understood. A more complete understanding of their global therapeutic effects would help us to identify any limitations of TKI monotherapy and to address these issues through novel combination therapies. Mathematical models are a complementary tool to experimental and clinical data that can provide valuable insights into the underlying mechanisms of TKI therapy. Previous modeling efforts have focused on CML patients who show biphasic and triphasic exponential declines in BCR-ABL ratio during therapy. However, our patient data indicates that many patients treated with TKIs show fluctuations in BCR-ABL ratio yet are able to achieve durable remissions. To investigate these fluctuations, we construct a mathematical model that integrates CML with a patient's autologous immune response to the disease. In our model, we define an immune window, which is an intermediate range of leukemic concentrations that lead to an effective immune response against CML. While small leukemic concentrations provide insufficient stimulus, large leukemic concentrations actively suppress a patient's immune system, thus limiting its ability to respond. Our patient data and modeling results suggest that at diagnosis, a patient's high leukemic concentration is able to suppress their immune system. TKI therapy drives the leukemic population into the immune window, allowing the patient's immune cells to expand and eventually mount an efficient response against the residual CML. This response drives the leukemic population below the immune window, causing the immune population to contract and allowing the leukemia to partially recover. The leukemia eventually reenters the immune window, thus stimulating a sequence of weaker immune responses as the two populations approach equilibrium. We hypothesize that a patient's autologous immune response to CML may explain the fluctuations in BCR-ABL ratio that are regularly seen during TKI therapy. These fluctuations may serve as a signature of a patient's individual immune response to CML. By applying our modeling framework to patient data, we are able to construct an immune profile that can then be used to propose patient-specific combination therapies aimed at further reducing a patient's leukemic burden. Our characterization of a patient's anti-leukemia immune response may be especially valuable in the study of drug resistance, treatment cessation, and combination therapy.
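
A minimal sketch of the qualitative mechanism described, assuming simple illustrative dynamics rather than the dissertation's fitted patient model: leukemic cells grow logistically and are killed by TKI therapy and by immune cells, while immune stimulation is window-shaped in the leukemic load, peaking at intermediate concentrations and suppressed at high ones.

```python
from scipy.integrate import solve_ivp

R_L, L_MAX = 0.3, 2e5   # leukemic growth rate (1/day), carrying capacity
D_TKI = 0.2             # TKI-induced death rate (1/day)
K_IMM = 0.05            # immune kill coefficient
S, K_WIN = 0.4, 5e4     # stimulation strength, immune-window scale
D_Z = 0.1               # immune decay rate (1/day)

def rhs(t, y):
    L, Z = y                 # leukemic load, immune effector population
    dL = R_L * L * (1 - L / L_MAX) - D_TKI * L - K_IMM * L * Z
    u = L / K_WIN            # window: ~u when small, ~1/u when large
    dZ = S * Z * u / (1 + u**2) - D_Z * Z
    return [dL, dZ]

sol = solve_ivp(rhs, (0, 1500), [1.9e5, 0.1], max_step=1.0)
# L(t) oscillates as the two populations approach equilibrium, mimicking
# the BCR-ABL fluctuations seen in patient data.
print(f"leukemic load after ~4 years: {sol.y[0, -1]:.0f}")
```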

Relevance: 30.00%

Abstract:

This dissertation proposes statistical methods to formulate, estimate and apply complex transportation models. Two main problems are part of the analyses conducted and presented in this dissertation. The first method solves an econometric problem and is concerned with the joint estimation of models that contain both discrete and continuous decision variables. The use of ordered models along with a regression is proposed, and their effectiveness is evaluated with respect to unordered models. Procedures to calculate and optimize the log-likelihood functions of both discrete-continuous approaches are derived, and difficulties associated with the estimation of unordered models are explained. Numerical approximation methods based on the Genz algorithm are implemented in order to solve the multidimensional integral associated with the unordered modeling structure. The problems deriving from the lack of smoothness of the probit model around the maximum of the log-likelihood function, which makes the optimization and the calculation of standard deviations very difficult, are carefully analyzed. A methodology to perform out-of-sample validation in the context of a joint model is proposed. Comprehensive numerical experiments have been conducted on both simulated and real data. In particular, the discrete-continuous models are estimated and applied to vehicle ownership and use models on data extracted from the 2009 National Household Travel Survey. The second part of this work offers a comprehensive statistical analysis of free-flow speed distributions; the method is applied to data collected on a sample of roads in Italy. A linear mixed model that includes speed quantiles in its predictors is estimated. Results show that there is no road effect in the analysis of free-flow speeds, which is particularly important for model transferability. A very general framework to predict random effects with few observations and incomplete access to model covariates is formulated and applied to predict the distribution of free-flow speed quantiles. The speed distribution of most road sections is successfully predicted; jack-knife estimates are calculated and used to explain why some sections are poorly predicted. Overall, this work contributes to the literature in transportation modeling by proposing econometric model formulations for discrete-continuous variables, more efficient methods for the calculation of multivariate normal probabilities, and random effects models for free-flow speed estimation that take into account the survey design. All methods are rigorously validated on both real and simulated data.
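
As a concrete example of the numerical task, the sketch below evaluates one rectangle probability of a correlated multivariate normal, the kind of term an unordered (probit-style) log-likelihood requires; SciPy's implementation of this integral is based on Genz's algorithm. The covariance matrix and bounds are illustrative assumptions.

```python
import numpy as np
from scipy.stats import multivariate_normal

cov = np.array([[1.0, 0.5, 0.2],
                [0.5, 1.0, 0.4],
                [0.2, 0.4, 1.0]])
mvn = multivariate_normal(mean=np.zeros(3), cov=cov)

# P(X1 <= 0.3, X2 <= 0.0, X3 <= 1.1): one utility-difference term of the
# kind that appears in an unordered-choice log-likelihood.
p = mvn.cdf([0.3, 0.0, 1.1])
print(f"choice probability term: {p:.4f}")
```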

Relevance: 30.00%

Abstract:

The American woodcock (Scolopax minor) population index in North America has declined 0.9% per year since 1968, prompting managers to identify priority information and management needs for the species (Sauer et al. 2008). Managers identified a need for a population model that better informs them about the status of American woodcock populations (Case et al. 2010). Population reconstruction techniques use long-term age-at-harvest data and harvest effort to estimate abundances with error estimates. Four new models were successfully developed using survey data (1999 to 2013). For the most current year, 2013, the optimal model estimates a sex-specific harvest probability of 0.148 (SE = 0.017) for adult females and 0.082 (SE = 0.008) for all other age-sex cohorts. The model estimated a yearly survival rate of 0.528 (SE = 0.008). Total abundance ranged from 5,206,000 woodcock in 2007 to 6,075,800 woodcock in 1999. This study provides the first abundance estimates for woodcock populations.
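
The core reconstruction identity can be sketched in a few lines: estimated abundance is observed harvest divided by harvest probability, carried between years by survival. The harvest counts below are hypothetical; the two rates echo the abstract's reported estimates.

```python
HARVEST_PROB = 0.082    # harvest probability, non-adult-female cohorts
SURVIVAL = 0.528        # yearly survival rate

harvest_by_year = {2011: 240_000, 2012: 228_000, 2013: 215_000}  # assumed

for year in sorted(harvest_by_year):
    n_hat = harvest_by_year[year] / HARVEST_PROB   # abundance estimate
    survivors = n_hat * SURVIVAL                   # expected next year
    print(f"{year}: N = {n_hat:,.0f}, expected survivors = {survivors:,.0f}")
```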

Relevance: 30.00%

Abstract:

Experiments with ultracold atoms in optical lattices have become a versatile testing ground to study diverse quantum many-body Hamiltonians. A single-band Bose-Hubbard (BH) Hamiltonian was first proposed to describe these systems in 1998 and its associated quantum phase transition was subsequently observed in 2002. Over the years, there has been rapid progress in experimental realizations of more complex lattice geometries, leading to more exotic BH Hamiltonians with contributions from excited bands, and modified tunneling and interaction energies. There have also been interesting theoretical insights and experimental studies on "unconventional" Bose-Einstein condensates in optical lattices and predictions of rich orbital physics in higher bands. In this thesis, I present our results on several multi-band BH models and emergent quantum phenomena. In particular, I study optical lattices with two local minima per unit cell and show that the low-energy states of a multi-band BH Hamiltonian with only pairwise interactions are equivalent to an effective single-band Hamiltonian with strong three-body interactions. I also propose a second method to create three-body interactions in ultracold gases of bosonic atoms in an optical lattice. In this case, this is achieved by a careful cancellation of two contributions in the pairwise interaction between the atoms, one proportional to the zero-energy scattering length and a second proportional to the effective range. I subsequently study the physics of Bose-Einstein condensation in the second band of a double-well 2D lattice and show that the collision-aided decay rate of the condensate to the ground band is smaller than the tunneling rate between neighboring unit cells. Finally, I propose a numerical method using the discrete variable representation for constructing real-valued Wannier functions localized in a unit cell for optical lattices. The developed numerical method is general and can be applied to a wide array of optical lattice geometries in one, two or three dimensions.
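
For concreteness, here is a minimal sketch of the single-band building block that the multi-band models above extend: the BH Hamiltonian H = -J Σ(a†_i a_{i+1} + h.c.) + (U/2) Σ n_i(n_i - 1), built in the occupation-number basis for a tiny chain and diagonalized exactly. J, U, and the system size are illustrative choices.

```python
import numpy as np
from itertools import product

M, N = 3, 2          # lattice sites, bosons
J, U = 1.0, 4.0      # tunneling and on-site interaction energies

# All occupation-number states with N bosons on M sites.
states = [s for s in product(range(N + 1), repeat=M) if sum(s) == N]
index = {s: k for k, s in enumerate(states)}
H = np.zeros((len(states), len(states)))

for s in states:
    k = index[s]
    H[k, k] = 0.5 * U * sum(n * (n - 1) for n in s)        # interaction
    for i in range(M - 1):                                 # bond i, i+1
        if s[i] > 0:                                       # a_{i+1}^dag a_i
            t = list(s); t[i] -= 1; t[i + 1] += 1
            amp = -J * np.sqrt(s[i] * (s[i + 1] + 1))
            kt = index[tuple(t)]
            H[kt, k] += amp
            H[k, kt] += amp                                # Hermitian conj.

print(f"ground-state energy: {np.linalg.eigvalsh(H)[0]:.3f}")
```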

Relevance: 30.00%

Abstract:

Secure computation involves multiple parties computing a common function while keeping their inputs private, and is a growing field of cryptography due to its potential for maintaining privacy guarantees in real-world applications. However, current secure computation protocols are not yet efficient enough to be used in practice. We argue that this is due to much of the research effort being focused on generality rather than specificity. Namely, current research tends to focus on constructing and improving protocols for the strongest notions of security or for an arbitrary number of parties. However, in real-world deployments, these security notions are often too strong, or the number of parties running a protocol would be smaller. In this thesis we make several steps towards bridging the efficiency gap of secure computation by focusing on constructing efficient protocols for specific real-world settings and security models. In particular, we make the following four contributions:

- We show an efficient (when amortized over multiple runs) maliciously secure two-party computation (2PC) protocol in the multiple-execution setting, where the same function is computed multiple times by the same pair of parties.
- We improve the efficiency of 2PC protocols in the publicly verifiable covert security model, where a party can cheat with some probability, but if it is caught then the honest party obtains a certificate proving that the given party cheated.
- We show how to optimize existing 2PC protocols when the function to be computed includes predicate checks on its inputs.
- We demonstrate an efficient maliciously secure protocol in the three-party setting.
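
To illustrate the basic privacy idea behind such protocols (none of the thesis's actual constructions is this simple), here is a minimal sketch of additive secret sharing among three parties: each input is split into random shares, and only the combination of all parties' local sums reveals the total.

```python
import secrets

Q = 2**61 - 1  # a public modulus

def share(x, n=3):
    """Split x into n additive shares mod Q."""
    parts = [secrets.randbelow(Q) for _ in range(n - 1)]
    parts.append((x - sum(parts)) % Q)
    return parts

# Three parties each share a private input.
inputs = [12, 30, 7]
shares = [share(x) for x in inputs]

# Party i locally adds the i-th share of every input...
local_sums = [sum(s[i] for s in shares) % Q for i in range(3)]
# ...and only combining all local sums reveals the total.
print(sum(local_sums) % Q)   # 49, with no party seeing another's input
```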

Relevance: 30.00%

Abstract:

A computer vision system that has to interact in natural language needs to understand the visual appearance of interactions between objects along with the appearance of objects themselves. Relationships between objects are frequently mentioned in queries of tasks like semantic image retrieval, image captioning, visual question answering and natural language object detection. Hence, it is essential to model context between objects for solving these tasks. In the first part of this thesis, we present a technique for detecting an object mentioned in a natural language query. Specifically, we work with referring expressions which are sentences that identify a particular object instance in an image. In many referring expressions, an object is described in relation to another object using prepositions, comparative adjectives, action verbs etc. Our proposed technique can identify both the referred object and the context object mentioned in such expressions. Context is also useful for incrementally understanding scenes and videos. In the second part of this thesis, we propose techniques for searching for objects in an image and events in a video. Our proposed incremental algorithms use the context from previously explored regions to prioritize the regions to explore next. The advantage of incremental understanding is restricting the amount of computation time and/or resources spent for various detection tasks. Our first proposed technique shows how to learn context in indoor scenes in an implicit manner and use it for searching for objects. The second technique shows how explicitly written context rules of one-on-one basketball can be used to sequentially detect events in a game.
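
A minimal sketch of the incremental, context-driven search idea, with hypothetical regions, priors, detector outputs, and co-occurrence boosts: detections in one region re-rank the priority of the regions still to be explored.

```python
import heapq

priors = {"r1": 0.6, "r2": 0.2, "r3": 0.3, "r4": 0.1}   # prior scores
detections = {"r1": ["chair"], "r3": ["desk"]}           # detector stub
context = {"chair": {"r3": 0.5}, "desk": {"r2": 0.4}}    # co-occurrence

heap = [(-p, r) for r, p in priors.items()]
heapq.heapify(heap)
explored = set()
while heap:
    neg_p, region = heapq.heappop(heap)
    if region in explored:
        continue
    explored.add(region)
    print(f"explore {region} (priority {-neg_p:.2f})")
    for obj in detections.get(region, []):
        for other, boost in context.get(obj, {}).items():
            if other not in explored:                 # context re-ranking
                heapq.heappush(heap, (-(priors[other] + boost), other))
```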

Relevance: 30.00%

Abstract:

Energy Conservation Measure (ECM) project selection is made difficult by real-world constraints: limited resources to implement savings retrofits, various suppliers in the market, and project financing alternatives. Many of these energy-efficient retrofit projects should be viewed as a series of investments with annual returns for these traditionally risk-averse agencies. Given a list of available ECMs, federal, state and local agencies must determine how to implement projects at lowest cost. The most common methods of implementation planning are suboptimal relative to cost. Federal, state and local agencies can obtain greater returns on their energy conservation investment than with traditional methods, regardless of the implementing organization. This dissertation outlines several approaches to improve the traditional energy conservation models. Any public buildings in regions with similar energy conservation goals, in the United States or internationally, can also benefit greatly from this research. Additionally, many private owners of buildings are under mandates to conserve energy; e.g., Local Law 85 of the New York City Energy Conservation Code requires any building, public or private, to meet the most current energy code for any alteration or renovation. Thus, both public and private stakeholders can benefit from this research. The research in this dissertation advances and presents models that decision-makers can use to optimize the selection of ECM projects with respect to the total cost of implementation. A practical application of a two-level mathematical program with equilibrium constraints (MPEC) improves the current best practice for agencies concerned with making the most cost-effective selection while leveraging energy services companies or utilities. The two-level model maximizes savings to the agency and profit to the energy services companies (Chapter 2). An additional model leverages a single congressional appropriation to implement ECM projects (Chapter 3). Returns from implemented ECM projects are used to fund additional ECM projects. In these cases, fluctuations in energy costs and uncertainty in the estimated savings severely influence ECM project selection and the amount of the appropriation requested. A proposed risk-aversion method imposes a minimum on the number of projects completed in each stage. A comparative method using Conditional Value at Risk is analyzed, and time consistency is addressed. This work demonstrates how a risk-based, stochastic, multi-stage model with binary decision variables at each stage provides a much more accurate estimate for planning than the agency's traditional approach and deterministic models. Finally, in Chapter 4, a rolling-horizon model allows for subadditivity and superadditivity of the energy savings to simulate interactive effects between ECM projects. The approach makes use of inequalities (McCormick, 1976) to re-express constraints that involve the product of binary variables with an exact linearization (related to the convex hull of those constraints). This model additionally shows the benefits of learning between stages while remaining consistent with the single congressional appropriations framework.
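
The McCormick linearization mentioned above is exact for binary variables. Here is a minimal sketch using PuLP, with hypothetical savings values, interaction bonus, and budget: the product z = x·y is replaced by three linear constraints, letting a superadditive interaction between two ECM projects enter a linear objective.

```python
from pulp import LpProblem, LpVariable, LpMaximize, LpBinary

prob = LpProblem("ecm_selection", LpMaximize)
x = LpVariable("lighting_retrofit", cat=LpBinary)
y = LpVariable("hvac_upgrade", cat=LpBinary)
z = LpVariable("both_selected", cat=LpBinary)   # stands in for x * y

# McCormick constraints; exact (not just a relaxation) for binaries:
prob += z <= x
prob += z <= y
prob += z >= x + y - 1

# Savings of 100 and 150, plus a 40-unit bonus only if both are chosen.
prob += 100 * x + 150 * y + 40 * z
prob += 5 * x + 8 * y <= 15                     # budget (assumed)

prob.solve()
print(x.value(), y.value(), z.value())          # -> 1.0 1.0 1.0
```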

Relevance: 30.00%

Abstract:

In this dissertation, we apply mathematical programming techniques (i.e., integer programming and polyhedral combinatorics) to develop exact approaches for influence maximization on social networks. We study four combinatorial optimization problems that deal with maximizing influence at minimum cost over a social network. To our knowledge, all previous work to date involving influence maximization problems has focused on heuristics and approximation. We start with the following viral marketing problem that has attracted a significant amount of interest from the computer science literature. Given a social network, find a target set of customers to seed with a product. Then, a cascade will be caused by these initial adopters and other people start to adopt this product due to the influence they receive from earlier adopters. The idea is to find the minimum cost that results in the entire network adopting the product. We first study a problem called the Weighted Target Set Selection (WTSS) Problem. In the WTSS problem, the diffusion can take place over as many time periods as needed and a free product is given out to the individuals in the target set. Restricting the number of time periods that the diffusion takes place over to be one, we obtain a problem called the Positive Influence Dominating Set (PIDS) problem. Next, incorporating partial incentives, we consider a problem called the Least Cost Influence Problem (LCIP). The fourth problem studied is the One Time Period Least Cost Influence Problem (1TPLCIP), which is identical to the LCIP except that we restrict the number of time periods that the diffusion takes place over to be one. We apply a common research paradigm to each of these four problems. First, we work on special graphs: trees and cycles. Based on the insights we obtain from special graphs, we develop efficient methods for general graphs. On trees, first, we propose a polynomial time algorithm. More importantly, we present a tight and compact extended formulation. We also project the extended formulation onto the space of the natural variables, which gives the polytope on trees. Next, building upon the result for trees, we derive the polytope on cycles for the WTSS problem, as well as a polynomial time algorithm on cycles. This leads to our contribution on general graphs. For the WTSS problem and the LCIP, using the observation that the influence propagation network must be a directed acyclic graph (DAG), the strong formulation for trees can be embedded into a formulation on general graphs. We use this to design and implement a branch-and-cut approach for the WTSS problem and the LCIP. In our computational study, we are able to obtain high quality solutions for random graph instances with up to 10,000 nodes and 20,000 edges (40,000 arcs) within a reasonable amount of time.
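
A minimal sketch of the diffusion process underlying the WTSS problem: a node adopts once enough of its neighbors have adopted, and the cascade runs for as many time periods as needed. The graph and thresholds are hypothetical.

```python
graph = {1: [2, 3], 2: [1, 3, 4], 3: [1, 2, 5], 4: [2, 5], 5: [3, 4]}
threshold = {1: 1, 2: 2, 3: 2, 4: 1, 5: 2}

def diffuse(seeds):
    """Return the set of adopters when seeding `seeds` with free product."""
    active = set(seeds)
    changed = True
    while changed:                 # unlimited time periods (WTSS-style)
        changed = False
        for v, nbrs in graph.items():
            if v not in active and sum(n in active for n in nbrs) >= threshold[v]:
                active.add(v)
                changed = True
    return active

# Seeding node 2 alone activates the entire network here.
print(sorted(diffuse({2})))   # [1, 2, 3, 4, 5]
```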