998 results for Maximal Models
Abstract:
A Linear Processing Complex Orthogonal Design (LPCOD) is a $p \times n$ matrix $\mathcal{E}$ ($p \geq n$) in $k$ complex indeterminates $x_1, x_2, \ldots, x_k$ such that (i) the entries of $\mathcal{E}$ are complex linear combinations of $0$, $\pm x_i$, $i = 1, \ldots, k$, and their conjugates, and (ii) $\mathcal{E}^H \mathcal{E} = D$, where $\mathcal{E}^H$ is the Hermitian (conjugate transpose) of $\mathcal{E}$ and $D$ is a diagonal matrix whose $(i,i)$-th diagonal element has the form $l_1^{(i)}|x_1|^2 + l_2^{(i)}|x_2|^2 + \cdots + l_k^{(i)}|x_k|^2$, where the $l_j^{(i)}$, $i = 1, 2, \ldots, n$, $j = 1, 2, \ldots, k$, are strictly positive real numbers; the condition $l_1^{(i)} = l_2^{(i)} = \cdots = l_k^{(i)}$ for all values of $i$ is called the equal-weights condition. For square designs it is known that whenever an LPCOD exists without the equal-weights condition satisfied, there exists another LPCOD with identical parameters satisfying $l_1^{(i)} = l_2^{(i)} = \cdots = l_k^{(i)} = 1$. This implies that the maximum possible rate for square LPCODs without the equal-weights condition is the same as that of square LPCODs with the equal-weights condition. In this paper, this result is extended to a subclass of non-square LPCODs: a set of sufficient conditions is identified such that whenever a non-square ($p > n$) LPCOD satisfies these conditions but does not satisfy the equal-weights condition, there exists another LPCOD with the same parameters $n$, $k$, and $p$ in the same complex indeterminates with $l_1^{(i)} = l_2^{(i)} = \cdots = l_k^{(i)} = 1$.
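The Alamouti $2 \times 2$ code is the standard example of a square complex orthogonal design in which condition (ii) holds with all weights equal to 1. A minimal numpy sketch (illustrative only, not from the paper) verifying $\mathcal{E}^H \mathcal{E} = (|x_1|^2 + |x_2|^2) I$:

```python
import numpy as np

# Alamouti code: a 2x2 complex orthogonal design in indeterminates x1, x2.
x1, x2 = 0.7 + 0.2j, -0.3 + 0.9j  # arbitrary sample values

E = np.array([[x1, x2],
              [-np.conj(x2), np.conj(x1)]])

# Condition (ii): E^H E = (|x1|^2 + |x2|^2) * I, i.e. a diagonal D
# with equal weights l_1^(i) = l_2^(i) = 1 for every row i.
D = E.conj().T @ E
expected = (abs(x1) ** 2 + abs(x2) ** 2) * np.eye(2)
assert np.allclose(D, expected)
print(np.round(D, 6))
```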
Abstract:
Cosmological inflation is the dominant paradigm for explaining the origin of structure in the universe. According to the inflationary scenario, there was a period of nearly exponential expansion in the very early universe, long before nucleosynthesis. Inflation is commonly considered a consequence of some scalar field or fields whose energy density comes to dominate the universe. The inflationary expansion converts quantum fluctuations of the fields into classical perturbations on superhorizon scales, and these primordial perturbations are the seeds of the structure in the universe. Moreover, inflation naturally explains the high degree of homogeneity and spatial flatness of the early universe. The real challenge of inflationary cosmology lies in establishing a connection between the fields driving inflation and theories of particle physics. In this thesis we concentrate on inflationary models at scales well below the Planck scale. The low scale allows us to seek candidates for the inflationary matter within extensions of the Standard Model, but it typically also implies fine-tuning problems. We discuss a low-scale model where inflation is driven by a flat direction of the Minimal Supersymmetric Standard Model. The relation between the potential along the flat direction and the underlying supergravity model is studied. The low inflationary scale requires an extremely flat potential, but we find that in this particular model the associated fine-tuning problems can be solved in a rather natural fashion in a class of supergravity models. For this class of models, the flatness is a consequence of the structure of the supergravity model and is insensitive to the vacuum expectation values of the fields that break supersymmetry. Another low-scale model considered in the thesis is the curvaton scenario, where the primordial perturbations originate from quantum fluctuations of a curvaton field, which is distinct from the fields driving inflation. The curvaton gives a negligible contribution to the total energy density during inflation, but its perturbations become significant in the post-inflationary epoch. The separation between the fields driving inflation and the fields giving rise to primordial perturbations opens up new possibilities for lowering the inflationary scale without introducing fine-tuning problems. The curvaton model typically gives rise to a relatively large level of non-Gaussian features in the statistics of primordial perturbations. We find that the level of non-Gaussian effects depends heavily on the form of the curvaton potential. Future observations that provide more accurate information about the non-Gaussian statistics can therefore place constraining bounds on the curvaton interactions.
Abstract:
This paper proposes the use of empirical modeling techniques for building microarchitecture-sensitive models for compiler optimizations. The models we build relate program performance to settings of compiler optimization flags, associated heuristics, and key microarchitectural parameters. Unlike traditional analytical modeling methods, this relationship is learned entirely from data obtained by measuring performance at a small number of carefully selected compiler/microarchitecture configurations. We evaluate three different learning techniques in this context, viz. linear regression, adaptive regression splines, and radial basis function networks. We use the generated models to a) predict program performance at arbitrary compiler/microarchitecture configurations, b) quantify the significance of complex interactions between optimizations and the microarchitecture, and c) efficiently search for 'optimal' settings of optimization flags and heuristics for any given microarchitectural configuration. Our evaluation using benchmarks from the SPEC CPU2000 suite suggests that accurate models (< 5% average error in prediction) can be generated using a reasonable number of simulations. We also find that using compiler settings prescribed by a model-based search can improve program performance by as much as 19% (with an average of 9.5%) over highly optimized binaries.
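As a rough illustration of the simplest of the three techniques, a linear-regression sketch that learns a performance model from sampled flag/microarchitecture configurations and predicts at an unseen one; the feature names, synthetic data, and scikit-learn usage are illustrative assumptions, not the paper's setup:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic training set: each row is one sampled configuration of
# binary optimization flags plus one microarchitectural parameter.
n_samples, n_flags = 40, 6
flags = rng.integers(0, 2, size=(n_samples, n_flags))         # flags on/off
cache_kb = rng.choice([256, 512, 1024], size=(n_samples, 1))  # e.g. L2 size
X = np.hstack([flags, cache_kb])

# Hypothetical measured runtimes (in practice: simulator measurements).
true_w = rng.normal(size=n_flags + 1)
y = X @ true_w + rng.normal(scale=0.1, size=n_samples)

model = LinearRegression().fit(X, y)

# Predict performance at an unseen flag/microarchitecture configuration.
x_new = np.array([[1, 0, 1, 1, 0, 1, 512]])
print(model.predict(x_new))
```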
Abstract:
Within Australia, there have been many attempts to pass voluntary euthanasia (VE) or physician-assisted suicide (PAS) legislation. From 16 June 1993 until the date of writing, 51 Bills have been introduced into Australian parliaments dealing with legalising VE or PAS. Despite these numerous attempts, the only successful Bill was the Rights of the Terminally Ill Act 1995 (NT), which was enacted in the Northern Territory, but a short time later overturned by the controversial Euthanasia Laws Act 1997 (Cth). Yet, in stark contrast to the significant political opposition, for decades Australian public opinion has overwhelmingly supported law reform legalising VE or PAS. While there is ongoing debate in Australia, both through public discourse and scholarly publications, about the merits and dangers of reform in this field, there has been remarkably little analysis of the numerous legislative attempts to reform the law, and the context in which those reform attempts occurred. The aim of this article is to better understand the reform landscape in Australia over the past two decades. The information provided in this article will better equip Australians, both politicians and the general public, to have a more nuanced understanding of the political context in which the euthanasia debate has been and is occurring. It will also facilitate a more informed debate in the future.
Abstract:
This article presents and evaluates Quantum Inspired models of Target Activation using Cued-Target Recall Memory Modelling over multiple sources of Free Association data. Two components were evaluated: whether Quantum Inspired models of Target Activation would provide a better framework than their classical psychological counterparts, and how robust these models are across the different sources of Free Association data. In previous work, a formal model of cued-target recall did not exist, and as such Target Activation could not be assessed directly. Furthermore, the data source used was suspected of suffering from temporal and geographical bias. As a consequence, Target Activation was measured against cued-target recall data as an approximation of performance. Since then, a formal model of cued-target recall (PIER3) has been developed [10], and alternative sources of data have also become available. This allowed us to directly model target activation in cued-target recall with human cued-target recall pairs and to use multiple sources of Free Association data. Featural characteristics known to be important to Target Activation were measured for each of the data sources to identify any major differences that may explain variations in performance for each of the models. Each of the activation models was used in the PIER3 memory model for each of the data sources and was benchmarked against cued-target recall pairs provided by the University of South Florida (USF). Two methods were used to evaluate performance. The first involved measuring the divergence between the sets of results using the Kullback-Leibler (KL) divergence, with the second utilizing a previous statistical analysis of the errors [9]. Of the three sources of data, two were sourced from human subjects: the USF Free Association Norms and the University of Leuven (UL) Free Association Networks. The third was sourced from a new method put forward by Galea and Bruza (2015) in which pseudo Free Association Networks (Corpus Based Association Networks - CANs) are built using co-occurrence statistics on a large text corpus. It was found that the Quantum Inspired models of Target Activation not only outperformed the classical psychological model but were also more robust across a variety of data sources.
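For the first evaluation method, the KL divergence measures how far a model's predicted recall distribution is from the observed one. A minimal sketch (the probabilities are made-up placeholders; `scipy.stats.entropy` computes the KL divergence when given two distributions):

```python
import numpy as np
from scipy.stats import entropy

# Hypothetical recall probabilities over the same set of target words,
# one distribution from the activation model, one from human recall data.
model_probs = np.array([0.50, 0.30, 0.15, 0.05])
human_probs = np.array([0.45, 0.35, 0.10, 0.10])

# KL(human || model): divergence of the model from observed recall.
kl = entropy(human_probs, model_probs)
print(f"KL divergence: {kl:.4f}")
```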
Abstract:
This paper presents the results of shaking table tests on geotextile-reinforced wrap-faced soil-retaining walls. Construction of model retaining walls in a laminar box mounted on a shaking table, instrumentation, and results from the shaking table tests are discussed in detail. The base motion parameters, surcharge pressure and number of reinforcing layers are varied in different model tests. It is observed from these tests that the response of the wrap-faced soil-retaining walls is significantly affected by the base acceleration levels, frequency of shaking, quantity of reinforcement and magnitude of surcharge pressure on the crest. The effects of these different parameters on acceleration response at different elevations of the retaining wall, horizontal soil pressures and face deformations are also presented. The results obtained from this study are helpful in understanding the relative performance of reinforced soil-retaining walls under different test conditions used in the experiments.
Abstract:
This thesis studies quantile residuals and uses different methodologies to develop test statistics that are applicable in evaluating linear and nonlinear time series models based on continuous distributions. Models based on mixtures of distributions are of special interest because it turns out that for those models traditional residuals, often referred to as Pearson's residuals, are not appropriate. As such models have become more and more popular in practice, especially with financial time series data, there is a need for reliable diagnostic tools that can be used to evaluate them. The aim of the thesis is to show how such diagnostic tools can be obtained and used in model evaluation. The quantile residuals considered here are defined in such a way that, when the model is correctly specified and its parameters are consistently estimated, they are approximately independent with standard normal distribution. All the tests derived in the thesis are pure significance type tests and are theoretically sound in that they properly take the uncertainty caused by parameter estimation into account. In Chapter 2, a general framework based on the likelihood function and smooth functions of univariate quantile residuals is derived that can be used to obtain misspecification tests for various purposes. Three easy-to-use tests aimed at detecting non-normality, autocorrelation, and conditional heteroscedasticity in quantile residuals are formulated. It also turns out that these tests can be interpreted as Lagrange Multiplier or score tests, so that they are asymptotically optimal against local alternatives. Chapter 3 extends the concept of quantile residuals to multivariate models. The framework of Chapter 2 is generalized, and tests aimed at detecting non-normality, serial correlation, and conditional heteroscedasticity in multivariate quantile residuals are derived based on it. Score test interpretations are obtained for the serial correlation and conditional heteroscedasticity tests and, in a rather restricted special case, for the normality test. In Chapter 4 the tests are constructed using the empirical distribution function of quantile residuals. The so-called Khmaladze martingale transformation is applied in order to eliminate the uncertainty caused by parameter estimation. Various test statistics are considered so that critical bounds for histogram type plots as well as Quantile-Quantile and Probability-Probability type plots of quantile residuals are obtained. Chapters 2, 3, and 4 contain simulations and empirical examples which illustrate the finite sample size and power properties of the derived tests and also how the tests and related graphical tools based on residuals are applied in practice.
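In the simplest univariate case, a quantile residual is obtained by passing each observation through the fitted model's CDF and then through the inverse standard normal CDF, $r_t = \Phi^{-1}(F(y_t; \hat{\theta}))$; under correct specification the residuals are approximately i.i.d. N(0, 1). A minimal sketch for a fitted Gaussian model (purely illustrative; the thesis's mixture models would replace `norm.cdf` with the fitted mixture CDF):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Illustrative data and a fitted Gaussian model's parameter estimates.
y = rng.normal(loc=0.2, scale=1.1, size=500)
mu_hat, sigma_hat = y.mean(), y.std(ddof=1)

# Quantile residuals: r_t = Phi^{-1}( F(y_t; theta_hat) ).
u = norm.cdf(y, loc=mu_hat, scale=sigma_hat)  # probability integral transform
r = norm.ppf(u)                               # quantile residuals

print(r.mean(), r.std())  # should be close to 0 and 1
```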
Abstract:
This paper develops a model for military conflicts where the defending forces have to determine an optimal partitioning of available resources to counter attacks from an adversary in two different fronts. The Lanchester attrition model is used to develop the dynamical equations governing the variation in force strength. Three different allocation schemes - Time-Zero-Allocation (TZA), Allocate-Assess-Reallocate (AAR), and Continuous Constant Allocation (CCA) - are considered and the optimal solutions are obtained in each case. Numerical examples are given to support the analytical results.
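For intuition, a minimal sketch of Lanchester square-law attrition under a Time-Zero-Allocation scheme, in which the defender splits total strength between the two fronts once at the start; the attrition coefficients, force levels, and allocation fraction are illustrative assumptions, not values from the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Lanchester square-law attrition on two fronts under TZA:
# the defender commits a fraction phi of x0 to front 1 at time zero.
a, b = 0.04, 0.05                   # illustrative attrition coefficients
x0, y1_0, y2_0 = 100.0, 60.0, 50.0  # defender total; attackers per front
phi = 0.55                          # defender's allocation to front 1

def attrition(t, s):
    x1, x2, y1, y2 = s
    return [-a * y1,   # defenders on front 1 attrited by attacker 1
            -a * y2,   # defenders on front 2 attrited by attacker 2
            -b * x1,   # attacker 1 attrited by defenders on front 1
            -b * x2]   # attacker 2 attrited by defenders on front 2

# Short horizon; this sketch has no stopping event for annihilation.
sol = solve_ivp(attrition, (0, 10), [phi * x0, (1 - phi) * x0, y1_0, y2_0])
print(sol.y[:, -1])  # surviving strengths [x1, x2, y1, y2] at t = 10
```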
Abstract:
Field placements provide social work students with the opportunity to integrate their classroom learning with the knowledge and skills used in various human service programs. The supervision structure that has most commonly been used is the intensive one-to-one, clinical teaching model. However, this model is being challenged by significant changes in educational and industry sectors, which have led to an increased use of alternative fieldwork structures and supervision arrangements, including task supervision, group supervision, external supervision, and shared supervisory arrangements. This study focuses on identifying models of supervision and student satisfaction with their learning experiences and the supervision received on placement. The study analysed responses to a questionnaire administered to 263 undergraduate social work students enrolled in three different campuses in Australia after they had completed their first or final field placement. The study identified that just over half of the placements used the traditional one student to one social work supervisor model. A number of “emerging” models were also identified, where two or more social workers were involved in the professional supervision of the student. High levels of dissatisfaction were reported by those students who received external social work supervision. Results suggest that students are more satisfied across all aspects of the placement where there is a strong on-site social work presence.
Abstract:
This thesis studies binary time series models and their applications in empirical macroeconomics and finance. In addition to previously suggested models, new dynamic extensions are proposed to the static probit model commonly used in the previous literature. In particular, we are interested in probit models with an autoregressive model structure. In Chapter 2, the main objective is to compare the predictive performance of the static and dynamic probit models in forecasting the U.S. and German business cycle recession periods. Financial variables, such as interest rates and stock market returns, are used as predictive variables. The empirical results suggest that the recession periods are predictable and that dynamic probit models, especially models with the autoregressive structure, outperform the static model. Chapter 3 proposes a Lagrange Multiplier (LM) test for the usefulness of the autoregressive structure of the probit model. The finite sample properties of the LM test are considered with simulation experiments. Results indicate that the two alternative LM test statistics have reasonable size and power in large samples. In small samples, a parametric bootstrap method is suggested to obtain approximately correct size. In Chapter 4, the predictive power of dynamic probit models in predicting the direction of stock market returns is examined. The novel idea is to use the recession forecast (see Chapter 2) as a predictor of the stock return sign. The evidence suggests that the signs of the U.S. excess stock returns over the risk-free return are predictable both in and out of sample. The new "error correction" probit model yields the best forecasts, and it also outperforms other predictive models, such as ARMAX models, in terms of statistical and economic goodness-of-fit measures. Chapter 5 generalizes the analysis of univariate models considered in Chapters 2-4 to the case of a bivariate model. A new bivariate autoregressive probit model is applied to predict the current state of the U.S. business cycle and growth rate cycle periods. Evidence of predictability of both cycle indicators is obtained, and the bivariate model is found to outperform the univariate models in terms of predictive power.
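In an autoregressive probit model, the recession probability at time $t$ is $\Phi(\pi_t)$, where the latent index $\pi_t$ follows an autoregressive law driven by lagged predictors. A minimal simulation sketch (the coefficients and the single predictor are illustrative, not estimates from the thesis):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

# Autoregressive probit: P(y_t = 1 | F_{t-1}) = Phi(pi_t),
# with latent index pi_t = omega + alpha * pi_{t-1} + beta * x_{t-1}.
omega, alpha, beta = -0.4, 0.8, -0.6
T = 200
x = rng.normal(size=T)        # e.g. a lagged interest-rate spread
pi = np.zeros(T)
y = np.zeros(T, dtype=int)    # binary state (1 = recession)

for t in range(1, T):
    pi[t] = omega + alpha * pi[t - 1] + beta * x[t - 1]
    p = norm.cdf(pi[t])       # recession probability at time t
    y[t] = rng.random() < p   # realized binary outcome

print("recession frequency:", y.mean())
```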
Abstract:
We present the results of a search for Higgs bosons predicted in two-Higgs-doublet models, in the case where the Higgs bosons decay to tau lepton pairs, using 1.8 fb$^{-1}$ of integrated luminosity of proton-antiproton collisions recorded by the CDF II experiment at the Fermilab Tevatron. Studying the observed mass distribution in events where one or both tau leptons decay leptonically, no evidence for a Higgs boson signal is observed. The result is used to infer exclusion limits in the two-dimensional parameter space of $\tan\beta$ versus $m_A$.