952 results for Nonhomogeneous initial-boundary-value problems
Abstract:
Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, even though uncertainty quantification remains central in the sciences, where the number of parameters to estimate often exceeds the sample size despite the huge increases in n now typical of many fields. The tendency in some areas of industry to dispense with traditional statistical analysis on the grounds that "n=all" is therefore of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and is the primary motivation for the work presented here.
Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is the design and characterization of computational algorithms that scale better in n or p. In the first case, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms and for characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.
One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
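To make the link between latent structure models and nonnegative tensor factorizations concrete, the following sketch (illustrative Python/NumPy only, not code from the thesis; all dimensions and names are invented) assembles the joint probability mass function of several categorical variables under a k-class latent class model, which is precisely a rank-k nonnegative PARAFAC decomposition of the probability tensor.

    import numpy as np

    rng = np.random.default_rng(0)
    k, levels = 2, [2, 3, 2]        # latent classes and category counts (illustrative)

    # Latent class model: P(y1,...,yp) = sum_h pi[h] * prod_j psi[j][h, y_j]
    pi = rng.dirichlet(np.ones(k))                              # class weights
    psi = [rng.dirichlet(np.ones(d), size=k) for d in levels]   # per-class marginal pmfs

    # Assemble the full probability tensor as a sum of k rank-one nonnegative terms
    P = np.zeros(levels)
    for h in range(k):
        component = psi[0][h]
        for j in range(1, len(levels)):
            component = np.multiply.outer(component, psi[j][h])
        P += pi[h] * component

    assert np.isclose(P.sum(), 1.0)   # a valid joint pmf over the contingency table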
Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations, and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and other common population structure inference problems is assessed in simulations and a real data application.
In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis--Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis--Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
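The optimal Gaussian approximation in Chapter 4 is derived analytically; as a rough illustration of the general idea of replacing an intractable log-linear posterior with a Gaussian, the sketch below computes a Laplace-type approximation (mode plus inverse Hessian of the negative log-posterior) for a toy Poisson log-linear model with a Gaussian prior. It is a hypothetical stand-in, not the Diaconis--Ylvisaker construction from the chapter.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)
    X = rng.binomial(1, 0.5, size=(50, 3)).astype(float)   # toy design matrix for a log-linear model
    y = rng.poisson(2.0, size=50)                           # toy cell counts
    tau2 = 10.0                                             # Gaussian prior variance (illustrative)

    def neg_log_post(beta):
        eta = X @ beta
        return np.sum(np.exp(eta) - y * eta) + beta @ beta / (2 * tau2)

    def neg_log_post_hessian(beta):
        W = np.diag(np.exp(X @ beta))
        return X.T @ W @ X + np.eye(X.shape[1]) / tau2

    fit = minimize(neg_log_post, np.zeros(X.shape[1]), method="BFGS")
    mean = fit.x                                       # Gaussian mean: the posterior mode
    cov = np.linalg.inv(neg_log_post_hessian(fit.x))   # Gaussian covariance: inverse Hessian at the mode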
Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
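The basic data object in this framework is the sequence of waiting times between exceedances of a high threshold; a minimal sketch of how these are extracted from an observed series (generic code, with the 95th percentile as an arbitrary illustrative threshold) is:

    import numpy as np

    rng = np.random.default_rng(2)
    x = rng.standard_t(df=3, size=5000)        # a heavy-tailed toy series

    u = np.quantile(x, 0.95)                   # high threshold (the choice is illustrative)
    exceedance_times = np.flatnonzero(x > u)   # time indices of threshold exceedances
    waiting_times = np.diff(exceedance_times)  # gaps between successive exceedances

    # The distribution of waiting_times encodes both the strength of tail dependence
    # and the temporal structure of the extremes, which is what the proposed
    # inferential framework targets.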
The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC), the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel, but comparatively little attention has been paid to convergence and estimation error in the resulting approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
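As a concrete instance of the kind of approximate transition kernel covered by this framework, the sketch below (hypothetical code, not from the thesis) replaces the full-data log-likelihood inside a random-walk Metropolis step with a rescaled log-likelihood computed on a random subset of the observations; the subsample size controls the trade-off between per-step cost and kernel error.

    import numpy as np

    rng = np.random.default_rng(3)
    data = rng.normal(1.0, 1.0, size=100_000)        # toy data: unknown mean, unit variance

    def subset_loglik(theta, m=1_000):
        # Rescaled log-likelihood on a random subset of m observations
        sub = rng.choice(data, size=m, replace=False)
        return (len(data) / m) * np.sum(-0.5 * (sub - theta) ** 2)

    theta, draws = 0.0, []
    for _ in range(5_000):
        proposal = theta + 0.05 * rng.standard_normal()
        # Approximate accept/reject step: the kernel error comes from the subsampled likelihood
        if np.log(rng.uniform()) < subset_loglik(proposal) - subset_loglik(theta):
            theta = proposal
        draws.append(theta)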
Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
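The slow mixing described above shows up in practice as high autocorrelation and a small effective sample size; a crude, sampler-agnostic diagnostic sketch (illustrative code, not from the thesis) is:

    import numpy as np

    def effective_sample_size(chain, max_lag=200):
        # Crude ESS estimate: n / (1 + 2 * sum of initial positive autocorrelations)
        x = np.asarray(chain, dtype=float)
        x = x - x.mean()
        n, denom, acf_sum = len(x), x @ x, 0.0
        for k in range(1, max_lag):
            rho = (x[:n - k] @ x[k:]) / denom
            if rho <= 0:          # truncate at the first non-positive autocorrelation
                break
            acf_sum += rho
        return n / (1 + 2 * acf_sum)

    # For a slowly mixing data augmentation chain in a rare-event setting, this
    # returns a value far below the nominal number of MCMC iterations.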
Abstract:
The unprecedented and relentless growth in the electronics industry is feeding the demand for integrated circuits (ICs) with increasing functionality and performance at minimum cost and power consumption. As predicted by Moore's law, ICs are being aggressively scaled to meet this demand. While the continuous scaling of process technology is reducing gate delays, the performance of ICs is being increasingly dominated by interconnect delays. In an effort to improve submicrometer interconnect performance, to increase packing density, and to reduce chip area and power consumption, the semiconductor industry is focusing on three-dimensional (3D) integration. However, volume production and commercial exploitation of 3D integration are not feasible yet due to significant technical hurdles.
At the present time, interposer-based 2.5D integration is emerging as a precursor to stacked 3D integration. All the dies and the interposer in a 2.5D IC must be adequately tested for product qualification. However, since the structure of 2.5D ICs differs from that of traditional 2D ICs, new challenges have emerged: (1) pre-bond interposer testing, (2) lack of test access, (3) limited ability for at-speed testing, (4) high-density I/O ports and interconnects, (5) reduced number of test pins, and (6) high power consumption. This research targets the above challenges, and effective solutions have been developed to test both the dies and the interposer.
The dissertation first introduces the basic concepts of 3D ICs and 2.5D ICs. Prior work on testing of 2.5D ICs is studied. An efficient method is presented to locate defects in a passive interposer before stacking. The proposed test architecture uses e-fuses that can be programmed to connect or disconnect functional paths inside the interposer. The concept of a die footprint is utilized for interconnect testing, and the overall assembly and test flow is described. Moreover, the concept of weighted critical area is defined and utilized to reduce test time. In order to fully determine the location of each e-fuse and the order of functional interconnects in a test path, we also present a test-path design algorithm. The proposed algorithm can generate all test paths for interconnect testing.
In order to test for opens, shorts, and interconnect delay defects in the interposer, a test architecture is proposed that is fully compatible with the IEEE 1149.1 standard and relies on an enhancement of the standard test access port (TAP) controller. To reduce test cost, a test-path design and scheduling technique is also presented that minimizes a composite cost function based on test time and the design-for-test (DfT) overhead in terms of additional through silicon vias (TSVs) and micro-bumps needed for test access. The locations of the dies on the interposer are taken into consideration in order to determine the order of dies in a test path.
To address the scenario of high density of I/O ports and interconnects, an efficient built-in self-test (BIST) technique is presented that targets the dies and the interposer interconnects. The proposed BIST architecture can be enabled by the standard TAP controller in the IEEE 1149.1 standard. The area overhead introduced by this BIST architecture is negligible; it includes two simple BIST controllers, a linear-feedback shift register (LFSR), a multiple-input signature register (MISR), and some extensions to the boundary-scan cells in the dies on the interposer. With these extensions, all boundary-scan cells can be used for self-configuration and self-diagnosis during interconnect testing. To reduce the overall test cost, a test scheduling and optimization technique under power constraints is described.
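The pattern generator in such a BIST architecture is typically an LFSR; the sketch below shows, in Python, how a generic Fibonacci LFSR produces pseudo-random test patterns from a small amount of hardware-friendly state. The width, seed, and tap positions are illustrative only and are not the configuration used in the dissertation.

    def lfsr_patterns(seed, taps, width, count):
        """Yield pseudo-random test patterns from a Fibonacci LFSR."""
        state = seed & ((1 << width) - 1)
        for _ in range(count):
            yield state
            feedback = 0
            for t in taps:                      # XOR of the tapped bits forms the feedback
                feedback ^= (state >> t) & 1
            state = ((state << 1) | feedback) & ((1 << width) - 1)

    # Example: an 8-bit LFSR; the tap set is a common maximal-length choice, used here for illustration.
    patterns = list(lfsr_patterns(seed=0xA5, taps=(7, 5, 4, 3), width=8, count=16))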
In order to accomplish testing with a small number of test pins, the dissertation presents two efficient ExTest scheduling strategies that implement interconnect testing between tiles inside a system-on-chip (SoC) die on the interposer while satisfying the practical constraint that the number of required test pins cannot exceed the number of available pins at the chip level. The tiles in the SoC are divided into groups based on the manner in which they are interconnected. In order to minimize the test time, two optimization solutions are introduced. The first solution minimizes the number of input test pins, and the second solution minimizes the number of output test pins. In addition, two subgroup configuration methods are further proposed to generate subgroups inside each test group.
Finally, the dissertation presents a programmable method for shift-clock stagger assignment to reduce power supply noise during SoC die testing in 2.5D ICs. An SoC die in the 2.5D IC is typically composed of several blocks, and two neighboring blocks that share the same power rails should not be toggled at the same time during shift. Therefore, the proposed programmable method does not assign the same stagger value to neighboring blocks. The positions of all blocks are first analyzed and the shared boundary length between blocks is then calculated. Based on the position relationships between the blocks, a mathematical model is presented to derive optimal results for small-to-medium-sized problems. For larger designs, a heuristic algorithm is proposed and evaluated.
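The requirement that neighboring blocks sharing power rails never receive the same stagger value is essentially a graph coloring constraint; the greedy sketch below illustrates that constraint only (hypothetical block names and adjacency; the dissertation's mathematical model and heuristic are more involved and also account for shared boundary lengths).

    def assign_staggers(blocks, neighbors, num_staggers):
        # Greedily assign stagger values so no block matches any already-assigned neighbor
        assignment = {}
        for b in blocks:                      # ordering could, e.g., follow shared-boundary length
            used = {assignment[n] for n in neighbors.get(b, ()) if n in assignment}
            for s in range(num_staggers):
                if s not in used:
                    assignment[b] = s
                    break
            else:
                raise ValueError(f"no feasible stagger value for block {b}")
        return assignment

    # Hypothetical floorplan: adjacency derived from blocks sharing power rails
    adjacency = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B", "D"], "D": ["C"]}
    print(assign_staggers(list(adjacency), adjacency, num_staggers=3))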
In summary, the dissertation targets important design and optimization problems related to testing of interposer-based 2.5D ICs. The proposed research has led to theoretical insights, experimental results, and a set of test and design-for-test methods to make testing effective and feasible from a cost perspective.
Abstract:
Core samples of calcareous sediments taken from above and below the proposed Cretaceous/Tertiary boundary (Sample 577-12-5, 130 cm) were examined for geochemical evidence of the mass extinctions and faunal successions that marked this period. The lipid compositions of the six core samples examined were virtually identical and were characterized by a large component of unresolved naphthenic hydrocarbons and a homologous series of an/mo-alkanes, both presumably of bacterial origin. The results of this preliminary study suggest that the lipids of sediments deposited over a several million year period encompassing the Cretaceous-Tertiary extinctions have been almost completely recycled by bacterial metabolism, which occurred under oxic depositional and/or diagenetic conditions and which left a unique bacterial signature with only minor traces of the original sedimentary lipids.
Abstract:
We analyzed the oxygen and carbon isotopic composition of planktonic and benthic foraminifers picked from 13 late Eocene to late Oligocene samples from DSDP Site 540 (23°49.73'N, 84°22.25'W, 2926 m water depth) from the Gulf of Mexico. An enrichment in δ18O of about 0.5 to 0.8 per mil occurs in both benthic foraminifers and surface-dwelling planktonic foraminifers between the latest Eocene and early Oligocene. This early Oligocene maximum is followed by lower δ18O values. A 1.2 per mil δ13C decrease in both benthic and planktonic foraminiferal data occurs from the late Eocene to the late Oligocene. There is a correspondence of the δ13C signal to deep-sea records; however, the amplitude of this change is greater than previously seen in deep-sea cores, possibly as a result of proximity to terrestrial sources of carbon. The covarying isotopic changes in both benthic and planktonic foraminifers suggest global causes, such as ice volume increases and increased terrestrial carbon input to the ocean. However, during the latter part of the record (early-late Oligocene), the increases in benthic δ18O without accompanying increases in planktonic δ18O suggest that changes occurred in only one part of the system; one potential explanation is a decrease in bottom-water temperatures without concomitant changes in the surface waters. The δ18O differences between species of planktonic foraminifers and the difference between planktonic and benthic δ18O data indicate that diagenesis problems are minimal. These preliminary results are encouraging given that these cores are partially lithified.
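For reference, the per-mil values quoted here use the standard delta notation for stable isotope ratios; for oxygen,

    \delta^{18}\mathrm{O} \;=\; \left( \frac{\big(^{18}\mathrm{O}/^{16}\mathrm{O}\big)_{\mathrm{sample}}}{\big(^{18}\mathrm{O}/^{16}\mathrm{O}\big)_{\mathrm{standard}}} - 1 \right) \times 10^{3}\ \text{(per mil)},

with δ13C defined analogously from 13C/12C; for foraminiferal carbonate the reference standard is conventionally (V)PDB.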
Abstract:
This reconnaissance study was undertaken to determine whether the mass extinctions and faunal successions that mark the Cretaceous/Tertiary (K/T) boundary left a discernible molecular fossil record in the sediments of this period. Lipid signatures of sediments taken from above and below the K/T boundary were compared in core and outcrop samples taken from two locations: the U.S. east coast continental margin (western Atlantic Ocean, DSDP Site 605) and Stevns Klint, Denmark. Four calcareous sediments taken from above and below the K/T boundary in DSDP Hole 605, Section 605-66-1, revealed lipid signatures that change across the boundary and are characterized by a large component of unresolved naphthenic hydrocarbons and a homologous series of n-alkanes ranging from C16 to C33. These lipid signatures are attributed to an influx of a terrestrial higher plant component and to bacterial reworking of the sediments under partially anoxic depositional and/or diagenetic conditions. The outcrop samples from Stevns Klint had extremely low concentrations of indigenous lipids. The fish clay at the K/T boundary contained traces of microbial hydrocarbons and fatty acids, whereas the carbonates above and below had only microbial fatty acids and additional terrestrial resin acids. The data from both sites indicate a perturbation in the deposition of lipid compound classes across the K/T boundary.
Abstract:
Free and "bound" long-chain alkenones (C37:2 and C37:3) in oxidized and unoxidized sections of four organic-matter-rich Pliocene and Miocene Madeira Abyssal Plain turbidites (one from Ocean Drilling Program site 951B and three from site 952A) were analyzed to determine the effect of severe post-depositional oxidation on the value of Uk'37. The profiles of both alkenones across the redox boundary show a preferential degradation of the C37:3 compared to the C37:2 compound. Because of the high initial Uk'37 values and the way Uk'37 is calculated, this degradation hardly influences the Uk'37 profiles. However, for lower Uk'37 values, the measured selective degradation would increase Uk'37 by up to 0.17 units, equivalent to 5°C. For most of the Uk'37 band-width, much smaller degradation already increases Uk'37 beyond the analytical error (0.017 units). Consequently, for interpreting the Uk'37 record in terms of past sea surface temperatures, selective degradation needs serious consideration.
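For context, the alkenone unsaturation index discussed here is conventionally defined from the relative abundances of the di- and tri-unsaturated C37 alkenones,

    U^{K'}_{37} \;=\; \frac{[\mathrm{C}_{37:2}]}{[\mathrm{C}_{37:2}] + [\mathrm{C}_{37:3}]},

so preferential loss of C37:3 pushes the index upward; with a commonly used calibration slope of roughly 0.033 units per °C, the 0.17-unit shift mentioned above corresponds to about 5°C.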
Abstract:
Waterways have many more ties with society than as a medium for the transportation of goods alone. Waterway systems offer society many kinds of socio-economic value. Waterway authorities responsible for management and (re)development need to optimize the public benefits of the investments made. However, due to the many trade-offs in the system, these agencies have multiple options for achieving this goal. Because they can invest resources in a great many different ways, they need a way to calculate the efficiency of the decisions they make. Transaction cost theory, and the analysis that goes with it, has emerged as an important means of justifying efficiency decisions in the economic arena. To improve our understanding of the value-creating and coordination problems facing waterway authorities, such a framework is applied to this sector. This paper describes the findings for two cases, which reflect two common multi-trade-off situations in waterway (re)development. Our first case study focuses on the Miami River, an urban revitalized waterway. The second case describes the Inner Harbour Navigation Canal in New Orleans, a canal and lock in an industrialized zone in need of an upgrade to keep pace with market developments. The transaction cost framework appears to be useful in exposing a wide variety of value-creating opportunities and the resistances that come with them. These insights can offer infrastructure managers guidance on how to seize these opportunities.
Abstract:
Highly swellable polymer films doped with Ag nanoparticle aggregates (poly-SERS films) have been used to record very high signal:noise ratio, reproducible surface-enhanced (resonance) Raman (SER(R)S) spectra of in situ dried ink lines and their constituent dyes using both 633 and 785 nm excitation. These allowed the chemical origins of differences in the SERRS spectra of different inks to be determined. Initial investigation of pure samples of the 10 most common blue dyes showed that dyes with very similar chemical structures, such as Patent Blue V and Patent Blue VF (which differ only by a single OH group), gave SERRS spectra in which the only indications that the dye structure had been changed were small differences in peak positions or relative intensities of the bands. SERRS studies of 13 gel pen inks were consistent with this observation. In some cases, inks from different types of pens could be distinguished even though they were dominated by a single dye such as Victoria Blue B (Zebra Surari) or Victoria Blue BO (Pilot Acroball) because their predominant dye did not appear in other inks. Conversely, identical spectra were also recorded from different types of pens (Pilot G7, Zebra Z-grip) because they all had the same dominant Brilliant Blue G dye. Finally, some of the inks contained mixtures of dyes which could be separated by TLC and removed from the plate before being analysed with the same poly-SERS films. For example, the Pentel EnerGel ink pen was found to give TLC spots corresponding to Erioglaucine and Brilliant Blue G. Overall, this study has shown that the spectral differences between different inks based on chemically similar but nonetheless distinct dyes are extremely small, so very close matches between SERRS spectra are required for confident identification. Poly-SERS substrates can routinely provide the very stringent reproducibility and sensitivity levels required. This, coupled with awareness of the reasons underlying the observed differences between similarly coloured inks, allows a more confident assessment of the evidential value of ink SERS and should underpin adoption of this approach as a routine method for the forensic examination of inks.
Abstract:
Increased complexity in large design and manufacturing organisations requires improvements at the operations management (OM)–applied service (AS) interface areas to improve project effectiveness. The aim of this paper is to explore the role of Lean in improving the longitudinal efficiency of the OM–AS interface within a large aerospace organisation, using Lean principles and boundary spanning theory. The methodology was an exploratory longitudinal case approach including exploratory interviews (n = 21), focus groups (n = 2), facilitated action-research workshops (n = 2) and two trials or experiments using longitudinal data involving both OM and AS personnel working at the interface. Lean principles and boundary spanning theory are drawn upon to guide and interpret the findings. It was found that misinterpretation, and forced implementation, of OM-based Lean terminology and practice in the OM–AS interface space led to delays and misplaced resources. Rather, both OM and AS staff were challenged to develop a cross-boundary understanding of Lean-based boundary (knowledge) objects in interpreting OM requests. The longitudinal findings from the experiments showed that the development of Lean performance measurement and Lean value stream constructs was more successful when these Lean constructs were treated as boundary (knowledge) objects requiring transformation over time, leading to improved effectiveness and to more consistent terminology and understanding across the OM–AS boundary spanning team.
Abstract:
The process of constituency boundary revision in Ireland, designed to satisfy what is perceived as a rigid requirement that a uniform deputy-population ratio be maintained across constituencies, has traditionally consumed a great deal of the time of politicians and officials. For almost two decades after a High Court ruling in 1961, the process was a political one, was highly contentious, and was marked by serious allegations of ministerial gerrymandering. The introduction in 1979 of constituency commissions made up of officials neutralised, for the most part, charges that the system had become too politicised, but it continued the process of micro-management of constituency boundaries. This article suggests that the continuing problems caused by this system – notably, the permanently changing nature of constituency boundaries and resulting difficulties of geographical identification – could be resolved by reversion to the procedure that is normal in proportional representation systems: periodic post-census allocation of seats to constituencies whose boundaries are based on those of recognised local government units and which are stable over time. This reform, replacing the principle of redistricting by the principle of reapportionment, would result in more recognisable constituencies, more predictable boundary trajectories over time, and a more efficient, fairer, and speedier process of revision.
Abstract:
Collaboration in the public sector is imperative to achieve e-government objectives such as improved efficiency and effectiveness of public administration and improved quality of public services. Collaboration across organizational and institutional boundaries requires public organizations to share e-government systems and services through for instance, interoperable information technology and processes. Demands on public organizations to become more open also require that public organizations adopt new collaborative approaches for inviting and engaging citizens in governmental activities. E-government related collaboration in the public sector is challenging, however, and collaboration initiatives often fail. Public organizations need to learn how to collaborate since forms of e-government collaboration and expected outcomes are mostly unknown. How public organizations can collaborate and the expected outcomes are thus investigated in this thesis by studying multiple collaboration cases on the acquisition and implementation of a particular e-government investment (digital archive). This thesis also investigates how e-government collaboration can be facilitated through artifacts. This is done through a case study, where objects that cross boundaries between collaborating communities in the public sector are studied, and by designing a configurable process model integrating several processes for social services. By using design science, this thesis also investigates how an m-government solution that facilitates collaboration between citizens and public organizations can be designed. The thesis contributes to the literature by describing five different modes of interorganizational collaboration in the public sector and the expected benefits from each mode. It also contributes with an instantiation of a configurable process model supporting three open social e-services and with evidence of how it can facilitate collaboration. This thesis further describes how boundary objects facilitate collaboration between different communities in an open government design initiative. It contributes with a designed mobile government solution, thereby providing proof of concept and initial design implications for enabling collaboration with citizens through citizen sourcing (outsourcing a governmental activity to citizens through an open call). This thesis also identifies research streams within e-government collaboration research through a literature review and the thesis contributions are related to the identified research streams. This thesis gives directions for future research by suggesting that future research should focus further on understanding e-government collaboration and how information and communication technology can facilitate collaboration in the public sector. It is suggested that further research should investigate m-government solutions to form design theories. Future research should also examine how value can be co-created in e-government collaboration.
Abstract:
Measures of the impact of Higher Education have often neglected the Chinese student view, despite the importance of these students to the UK and Chinese economies. This research paper details the findings of a quantitative survey that was purposively distributed to Chinese graduates who enrolled at the University of Worcester on the Business Management degree between 2004 and 2011 (n=49). Analysis has been conducted on their skill development throughout their degree, their skill usage in different employment contexts, the value of their degree, and gender differences in skill development and usage. Discrepancies between skill development and usage, between males and females, and with previous research findings are discussed. Future research directions are also specified.
Abstract:
The removal of trade impediments is expected to cause companies to integrate more of their operations among countries; however, experience shows that behavioral factors often impede the requisite cooperation and commitment among managers from different countries. This paper discusses these behavioral problems from a national perspective and examines an approach to integration, value networks, which is not bounded by nation-states and their differences or similarities.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
We consider a parametric semilinear Dirichlet problem driven by the Laplacian plus an indefinite unbounded potential and with a reaction of superdiffusive type. Using variational and truncation techniques, we show that there exists a critical parameter value λ_{∗}>0 such that for all λ>λ_{∗} the problem has at least two positive solutions, for λ=λ_{∗} the problem has at least one positive solution, and no positive solutions exist when λ∈(0,λ_{∗}). Also, we show that for λ≥λ_{∗} the problem has a smallest positive solution.
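A representative formulation of the problem class described above (our notation, for illustration only; the precise hypotheses on the potential and the reaction are those of the paper) is

    \begin{cases}
    -\Delta u(z) + \xi(z)\, u(z) = f(z, u(z), \lambda) & \text{in } \Omega,\\
    u = 0 & \text{on } \partial\Omega, \qquad \lambda > 0,\ u > 0,
    \end{cases}

with ξ the indefinite unbounded potential and the parametric reaction f(z, x, λ) of superdiffusive type in x.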