973 results for Homography constraint
Abstract:
This paper deals with the so-called Person Case Constraint (PCC; Bonet, 1991), a universal constraint blocking accusative clitics and object agreement morphemes other than third person when a dative is inserted in the same clitic/agreement cluster. The aim of this paper is twofold. First, we argue that the scope of the PCC is considerably broader than assumed in previous work, and that neither its formulation in terms of person (1st/2nd vs. 3rd) and case (accusative vs. dative) restrictions nor its morphological nature is part of the right descriptive generalization. We present evidence (i) that the PCC is triggered by the presence of an animacy feature in the object’s agreement set; (ii) that it is not case dependent, also showing up in languages that lack dative case; and (iii) that it is not morphologically bound. Second, we argue that the PCC, even if modified accordingly, still conflates two different properties of the agreement system that should be kept apart: (i) a crosslinguistic sensitivity of object agreement to animacy and (ii) a similarly widespread restriction on multiple object agreement observed crosslinguistically. These properties lead us to propose a new generalization, the Object Agreement Constraint (OAC): if the verbal complex encodes object agreement, no other argument can be licensed through verbal agreement.
Abstract:
Maia Duguine, Susana Huidobro and Nerea Madariaga (eds.)
Abstract:
In May 2010, Brazil joined the roll of nations with a National Broadband Plan. Decree No. 7,175/2010 implemented a program that aimed to provide 30 million permanent broadband connections by 2014 and established its main goals, such as accelerating economic and social development, promoting digital inclusion, reducing social and regional inequalities, promoting the generation of employment and income, and expanding electronic government services. However, broadband access in Brazil is limited, expensive, and concentrated in the main urban centres. Despite fast growth in recent years driven by mobile internet access, the market remains concentrated among the local incumbent operators, which currently provide mobile services, landline services and Paid-TV services, resulting in a high level of market verticalization. This dissertation investigates the constraints on broadband access development, as well as the dynamics, the actors, and the factors that have delayed the roll-out of broadband services in Brazil. The study also promotes reflection on the challenge posed by the media, by consumer associations and by public opinion as critical observers of the policy-making process. This research examines the political influence on regulation to determine the way policy will benefit interest groups. Many interviews were conducted in order to understand the forces that have been acting in Brazilian telecommunications since privatization in 1998. This study aims to provide a better understanding of the telecommunications regulatory process in Brazil, in order to help the country find an adequate policy that can lead to the implementation of a broadband roll-out. Universal broadband access is the only way to provide Brazilian society as a whole with a satisfactory level of education and to create more jobs and economic development through the full use of Information and Communications Technology (ICT).
Abstract:
Scalable video coding allows an efficient provision of video services at different quality levels with different energy demands. Depending on the specific type of service and network scenario, end users and/or operators may decide to choose among different energy versus quality combinations. In order to deal with the resulting trade-off, in this paper we analyze the number of video layers that are worth receiving, taking into account the energy constraints. A single-objective optimization is proposed based on dynamically selecting the number of layers, which minimizes the energy consumption subject to the constraint that a minimum quality threshold is reached. However, this approach cannot reflect the fact that the same increment in energy consumption may result in different increments in visual quality. Thus, a multiobjective optimization is proposed and a utility function is defined in order to weight the energy consumption and visual quality criteria. Finally, since solving the optimization is computationally expensive on mobile devices, a heuristic algorithm is proposed. In this way, significant reductions in energy consumption are achieved while keeping reasonable quality levels.
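As a minimal, hypothetical sketch of the utility-based layer selection described above (the quality and energy figures, the weights and the threshold are assumed for illustration; the paper's actual model, units and heuristic are not reproduced here):

```python
def select_layers(quality, energy, w_q=1.0, w_e=1.0, q_min=0.0):
    """Pick the number of scalable-video layers maximizing a weighted
    quality-vs-energy utility, subject to a minimum quality threshold.

    quality[k], energy[k]: (assumed) quality and energy cost of decoding
    the first k+1 layers, both increasing with k.
    """
    best_k, best_u = None, float("-inf")
    for k, (q, e) in enumerate(zip(quality, energy)):
        if q < q_min:
            continue                      # violates the minimum-quality constraint
        u = w_q * q - w_e * e             # utility weighting the two criteria
        if u > best_u:
            best_k, best_u = k + 1, u
    return best_k                         # number of layers worth receiving

# Example with made-up numbers (normalized quality, relative energy cost):
print(select_layers(quality=[0.55, 0.75, 0.85, 0.90],
                    energy=[0.20, 0.45, 0.80, 1.30],
                    w_q=1.0, w_e=0.5, q_min=0.6))
```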
Abstract:
The problem discussed is the stability of two input-output feedforward and feedback relations, under an integral-type constraint defining an admissible class of feedback controllers. Sufficiency-type conditions are given for the positive, bounded, closed-range feedforward operator to be strictly positive and hence boundedly invertible, its inverse then also being a strictly positive operator. The general formalism is first established and then linked to properties of some typical contractive and pseudocontractive mappings, while some real-world applications and links of the formalism to asymptotic hyperstability of dynamic systems are discussed afterwards.
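The abstract does not reproduce the constraint itself; purely for orientation, an integral-type constraint of the kind typically used to define an admissible feedback class in hyperstability theory is the Popov-like inequality
\[
\int_0^{t} u^{\top}(\tau)\, y(\tau)\, d\tau \;\ge\; -\gamma_0^{2}, \qquad \forall t \ge 0,
\]
for some finite constant \(\gamma_0\), where \(u\) and \(y\) are the input and output of the feedback block; the exact form assumed in the paper may differ.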
Abstract:
The cod stock in the Western Baltic Sea is assessed as overfished according to the definitions of the UN World Summit on Sustainable Development held in Johannesburg in 2002. Thus, the European Fisheries Council enforced a multi-annual management plan in 2007. Our medium-term simulations over the next 10 years assume stock productivity similar to that of the past four decades and indicate that the goals of the management plan can be achieved through TAC and consistent effort regulations. Taking account of the uncertainty in the recruitment patterns, the target average fishing mortality of F = 0.6 per year for age groups 3-6, as defined in the management plan, is indicated to exceed sustainable levels consistent with high long-term yields and low risk of depletion. The stipulated constraint of ±15% on annual TAC variations will dominate future fisheries management and implies a high recovery potential of the stock through continued reductions in fishing mortality. Scientific assessment of sustainable levels of exploitation, and their consideration in the plan, is strongly advised, taking account of uncertainties attributed to environmental and biological effects. We recommend that our study be complemented with economic impact assessments, including effects on by-catch species, which have been disregarded in this study. It is further demonstrated that the goals of the management plan can alternatively be achieved by mesh size adaptations. An alternative technical option of mesh size increases to realize the required reductions in fishing mortality would avoid discards of undersized fish after a few years by means of improved selectivity, another important element of the Common Fisheries Policy. However, it is emphasized that technical regulations since 1990 have failed to affect the by-catch and discards of juvenile cod. In any case, meaningful implementation of the multiannual management plan through stringent control and enforcement appears critical.
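A minimal sketch of the ±15% inter-annual TAC constraint mentioned above (function name and figures are illustrative only, not taken from the study):

```python
def constrain_tac(proposed_tac, previous_tac, max_change=0.15):
    """Cap the year-on-year change of the total allowable catch at +/-15%."""
    lower = previous_tac * (1.0 - max_change)
    upper = previous_tac * (1.0 + max_change)
    return min(max(proposed_tac, lower), upper)

# e.g. an advised cut from 20,000 t to 14,000 t would be limited to 17,000 t:
print(constrain_tac(proposed_tac=14_000, previous_tac=20_000))
```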
Abstract:
The hydrodynamics of a free flapping foil is studied numerically. The foil undergoes a forced vertical oscillation and is free to move horizontally. The effect of the chord-thickness ratio is investigated by varying this parameter while fixing the others, such as the Reynolds number, the density ratio, and the flapping amplitude. Three different flow regimes are identified as the chord-thickness ratio increases: left-right symmetric motion, back-and-forth chaotic motion, and unidirectional motion with a staggered vortex street. It is observed that the chord-thickness ratio can affect the symmetry-breaking bifurcation, the arrangement of vortices in the wake, and the terminal velocity of the foil. The similarity of the symmetry-breaking bifurcation in the present problem to that of a flapping body under constraint is discussed. A comparison between the dynamic behaviors of an elliptic foil and a rectangular foil at various chord-thickness ratios is also presented.
Abstract:
The study examines the integration of cultural, economic and environmental requirements for fish production in Borno State, Nigeria. A reconnaissance survey was conducted traversing selected Local Government Areas. Sixty questionnaires were administered in six Local Government Areas: Biu and Shani representing southern Borno, Konduga and Jere representing central Borno, and Gubia and Kukawa representing northern Borno. There is no cultural constraint on fish production, but about 63% of respondents prefer to invest in other farming activities rather than in fish farming. 33% are not aware that fish can be cultured rather than only obtained from the wild. 35% have the impression that fish farming ventures can be handled only by government. The economic returns of fish production are high, especially in some parts of northern Borno, and the local market potential throughout the state is great. Nigeria has suitable soil for ponds, apart from a few locations in central and northern Borno that have sandy soils. Numerous perennial and seasonal rivers, streams, lakes, pools and floodplains adequate for fish culture exist, especially in southern Borno. The mean annual rainfall can provide some water storage in ponds. In areas where annual precipitation is less than 550 mm, a few flowing boreholes with potential for fish production exist. The temperature regime may support growth and survival of fish even during the hottest months of the year (March, April and May). With the understanding and manipulation of these requirements, fish production in Nigeria can be greatly enhanced.
Abstract:
A new high-order finite volume method based on local reconstruction is presented in this paper. The method, called the multi-moment constrained finite volume (MCV) method, uses point values defined at equally spaced points within a single cell as the model variables (or unknowns). The time evolution equations used to update the unknowns are derived from a set of constraint conditions imposed on multiple kinds of moments, i.e. the cell-averaged value and the point-wise values of the state variable and its derivatives. The finite volume constraint on the cell average guarantees the numerical conservativeness of the method. Most constraint conditions are imposed on the cell boundaries, where the numerical flux and its derivatives are solved as general Riemann problems. A multi-moment constrained Lagrange interpolation reconstruction of the required order of accuracy is constructed over a single cell and converts the evolution equations of the moments into those of the unknowns. The presented method provides a general framework for constructing efficient schemes of high order. The basic formulations for hyperbolic conservation laws on 1D and 2D structured grids are detailed, with numerical results for widely used benchmark tests.
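As a sketch of the constraints described above, for a 1D hyperbolic conservation law \(\partial_t u + \partial_x f(u) = 0\) on a cell \(C_i=[x_{i-1/2},x_{i+1/2}]\) (the exact constraint set depends on the order of the scheme), the finite-volume constraint on the cell average and the point-value constraints at the cell boundaries read, respectively,
\[
\frac{d}{dt}\,\bar u_i = -\frac{\hat f_{i+1/2}-\hat f_{i-1/2}}{\Delta x},
\qquad
\bar u_i = \frac{1}{\Delta x}\int_{C_i} u\,dx,
\]
\[
\frac{d}{dt}\,u_{i\pm 1/2} = -\left.\partial_x f\right|_{x_{i\pm 1/2}},
\]
where the interface flux \(\hat f\) and its derivative are obtained from the Riemann problems solved at the cell boundaries, as stated in the abstract.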
Abstract:
A means of assessing the effectiveness of methods used in the numerical solution of various linear ill-posed problems is outlined. Two methods, Tikhonov's method of regularization and the quasireversibility method of Lattès and Lions, are appraised from this point of view.
In the former method, Tikhonov provides a useful means for incorporating a constraint into numerical algorithms. The analysis suggests that the approach can be generalized to embody constraints other than those employed by Tikhonov. This is effected and the general "T-method" is the result.
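As a minimal numerical sketch of the idea described here, i.e. a constraint operator incorporated into Tikhonov-style regularization of a discrete linear ill-posed problem A x ≈ b (the matrix construction and the second-difference constraint below are illustrative assumptions, not the thesis's own choices):

```python
import numpy as np

def tikhonov_solve(A, b, L, lam):
    """Minimize ||A x - b||^2 + lam * ||L x||^2 via the normal equations.

    L is the constraint (penalty) operator: the identity recovers classical
    Tikhonov regularization; other choices, e.g. a discrete derivative,
    embody other constraints in the spirit of a general "T-method".
    """
    lhs = A.T @ A + lam * (L.T @ L)
    rhs = A.T @ b
    return np.linalg.solve(lhs, rhs)

# Illustrative use: a smoothness constraint via a second-difference operator.
n = 50
L = np.diff(np.eye(n), n=2, axis=0)                    # (n-2) x n second differences
rng = np.random.default_rng(1)
A = rng.normal(size=(n, n)) * np.exp(-0.2 * np.arange(n))   # ill-conditioned forward operator
x_true = np.sin(np.linspace(0, np.pi, n))
b = A @ x_true + 1e-3 * rng.normal(size=n)             # noisy data
x_reg = tikhonov_solve(A, b, L, lam=1e-2)
```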
A T-method is used on an extended version of the backwards heat equation with spatially variable coefficients. Numerical computations based upon it are performed.
The statistical method developed by Franklin is shown to have an interpretation as a T-method. This interpretation, although somewhat loose, does explain some empirical convergence properties which are difficult to pin down via a purely statistical argument.
Abstract:
Signal processing techniques play important roles in the design of digital communication systems. These include information manipulation, transmitter signal processing, channel estimation, channel equalization and receiver signal processing. By interacting with communication theory and system implementation technologies, signal processing specialists develop efficient schemes for various communication problems by judiciously exploiting mathematical tools such as analysis, probability theory, matrix theory, optimization theory, and many others. In recent years, researchers have realized that multiple-input multiple-output (MIMO) channel models are applicable to a wide range of different physical communication channels. Using elegant matrix-vector notation, many MIMO transceiver (including precoder and equalizer) design problems can be solved by matrix and optimization theory. Furthermore, researchers have shown that majorization theory and matrix decompositions, such as the singular value decomposition (SVD), the geometric mean decomposition (GMD) and the generalized triangular decomposition (GTD), provide unified frameworks for solving many of the point-to-point MIMO transceiver design problems.
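As a small, self-contained illustration of the matrix-decomposition viewpoint mentioned above (this is a generic SVD-based linear transceiver, not any of the specific designs developed in the thesis; the channel and symbols are randomly generated for the example):

```python
import numpy as np

# With H = U diag(s) V^H, precoding with V and equalizing with U^H turns a
# flat MIMO channel into parallel scalar subchannels with gains s.
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))   # assumed flat MIMO channel
U, s, Vh = np.linalg.svd(H)

x = rng.normal(size=4) + 1j * rng.normal(size=4)             # data symbols
tx = Vh.conj().T @ x                                          # precoder V
rx = U.conj().T @ (H @ tx)                                    # equalizer U^H
assert np.allclose(rx, s * x)                                 # parallel subchannels
```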
In this thesis, we consider transceiver design problems for linear time-invariant (LTI) flat MIMO channels, linear time-varying narrowband MIMO channels, flat MIMO broadcast channels, and doubly selective scalar channels. Additionally, the channel estimation problem is considered. The main contributions of this dissertation are the development of new matrix decompositions and the use of these decompositions, together with majorization theory, in practical transmit-receive scheme designs for transceiver optimization problems. Elegant solutions are obtained, novel transceiver structures are developed, ingenious algorithms are proposed, and performance analyses are derived.
The first part of the thesis focuses on transceiver design for LTI flat MIMO channels. We propose a novel matrix decomposition which decomposes a complex matrix into a product of several sets of semi-unitary matrices and upper triangular matrices in an iterative manner. The complexity of the new decomposition, the generalized geometric mean decomposition (GGMD), is always less than or equal to that of the geometric mean decomposition (GMD). The optimal GGMD parameters which yield the minimal complexity are derived. Based on the channel state information (CSI) at both the transmitter (CSIT) and the receiver (CSIR), the GGMD is used to design a butterfly-structured decision feedback equalizer (DFE) MIMO transceiver which achieves the minimum average mean square error (MSE) under the total transmit power constraint. A novel iterative detection algorithm for the specific receiver is also proposed. For application to cyclic prefix (CP) systems, in which the SVD of the equivalent channel matrix can be easily computed, the proposed GGMD transceiver has a K/log_2(K)-fold complexity advantage over the GMD transceiver, where K is the number of data symbols per data block and is a power of 2. The performance analysis shows that the GGMD DFE transceiver can convert a MIMO channel into a set of parallel subchannels with the same bias and signal-to-interference-plus-noise ratios (SINRs). Hence, the average bit error rate (BER) is automatically minimized without the need for bit allocation. Moreover, the proposed transceiver can achieve the channel capacity simply by applying independent scalar Gaussian codes of the same rate on the subchannels.
In the second part of the thesis, we focus on MIMO transceiver design for slowly time-varying MIMO channels under the zero-forcing or MMSE criterion. Even though the GGMD/GMD DFE transceivers work for slowly time-varying MIMO channels by exploiting the instantaneous CSI at both ends, their performance is by no means optimal, since the temporal diversity of the time-varying channels is not exploited. Based on the GTD, we develop the space-time GTD (ST-GTD) for the decomposition of linear time-varying flat MIMO channels. Under the assumption that CSIT, CSIR and channel prediction are available, we use the proposed ST-GTD to develop space-time geometric mean decomposition (ST-GMD) DFE transceivers under the zero-forcing or MMSE criterion. Under perfect channel prediction, the new system minimizes both the average MSE at the detector in each space-time (ST) block (which consists of several coherence blocks) and the average per-ST-block BER in the moderately high SNR region. Moreover, the ST-GMD DFE transceiver designed under the MMSE criterion maximizes the Gaussian mutual information over the equivalent channel seen by each ST block. In general, the newly proposed transceivers perform better than the GGMD-based systems, since the superimposed temporal precoder is able to exploit the temporal diversity of time-varying channels. For practical applications, a novel ST-GTD based system which does not require channel prediction but shares the same asymptotic BER performance with the ST-GMD DFE transceiver is also proposed.
The third part of the thesis considers two quality-of-service (QoS) transceiver design problems for flat MIMO broadcast channels. The first is the power minimization problem (min-power) with a total bitrate constraint and per-stream BER constraints. The second is the rate maximization problem (max-rate) with a total transmit power constraint and per-stream BER constraints. Exploiting a particular class of joint triangularization (JT), we are able to jointly optimize the bit allocation and the broadcast DFE transceiver for the min-power and max-rate problems. The resulting optimal designs are called the minimum power JT broadcast DFE transceiver (MPJT) and the maximum rate JT broadcast DFE transceiver (MRJT), respectively. In addition to the optimal designs, two suboptimal designs based on QR decomposition are proposed; they are realizable for an arbitrary number of users.
Finally, we investigate the design of a discrete Fourier transform (DFT) modulated filterbank transceiver (DFT-FBT) over linear time-varying (LTV) scalar channels. For both the case of known LTV channels and that of channels known only through their wide-sense stationary uncorrelated scattering (WSSUS) statistics, we show how to optimize the transmitting and receiving prototypes of a DFT-FBT such that the SINR at the receiver is maximized. Also, a novel pilot-aided subspace channel estimation algorithm is proposed for orthogonal frequency division multiplexing (OFDM) systems with quasi-stationary multi-path Rayleigh fading channels. Using the concept of a difference co-array, the new technique can construct M^2 co-pilots from M physical pilot tones with alternating pilot placement. Subspace methods, such as MUSIC and ESPRIT, can be used to estimate the multipath delays, and the number of identifiable paths is theoretically up to O(M^2). With the delay information, an MMSE estimator of the frequency response is derived. Simulations show that the proposed method outperforms the conventional subspace channel estimator when the number of multipaths is greater than or equal to the number of physical pilots minus one.
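A small, hypothetical sketch of the difference co-array idea invoked here (the pilot placement below is made up; the thesis's alternating placement and the subsequent MUSIC/ESPRIT steps are not reproduced):

```python
import numpy as np

# From M physical pilot tone positions, the set of pairwise index differences
# (the difference co-array) can contain O(M^2) distinct lags, which subspace
# methods can then exploit to identify more paths than pilots.
pilot_positions = np.array([0, 1, 4, 9, 11])              # assumed pilot placement
lags = {int(p - q) for p in pilot_positions for q in pilot_positions}
print(len(pilot_positions), "physical pilots ->", len(lags), "co-array lags")
```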
Abstract:
Thesis defended at the University of Aberdeen. 178 p.
Abstract:
Limitation to an aqueous habitat is the most fundamental physiological constraint imposed upon fish; phrases such as 'like a fish out of water' convey our acceptance of the general unsuitability of fish for terrestrial existence. The constraints that restrict fish to an aquatic habitat relate to respiration, acid-base regulation, nitrogenous excretion, water balance and ionic regulation. A fish not adapted for an amphibious lifestyle, when removed from water, becomes hypoxic and hypercapnic and soon succumbs to respiratory acidosis, because the problems of excreting H+ and CO2 are more immediate than the lack of oxygen. This happens because fish gills collapse in air, while the ventilatory arrangements that move an incompressible medium (water) over them become ineffective.
Abstract:
Recent observations of the temperature anisotropies of the cosmic microwave background (CMB) favor an inflationary paradigm in which the scale factor of the universe inflated by many orders of magnitude at some very early time. Such a scenario would produce the observed large-scale isotropy and homogeneity of the universe, as well as the scale-invariant perturbations responsible for the observed (10 parts per million) anisotropies in the CMB. An inflationary epoch is also theorized to produce a background of gravitational waves (or tensor perturbations), the effects of which can be observed in the polarization of the CMB. The E-mode (or parity even) polarization of the CMB, which is produced by scalar perturbations, has now been measured with high significance. Contrastingly, today the B-mode (or parity odd) polarization, which is sourced by tensor perturbations, has yet to be observed. A detection of the B-mode polarization of the CMB would provide strong evidence for an inflationary epoch early in the universe’s history.
In this work, we explore experimental techniques and analysis methods used to probe the B-mode polarization of the CMB. These experimental techniques have been used to build the Bicep2 telescope, which was deployed to the South Pole in 2009. After three years of observations, Bicep2 has acquired one of the deepest observations of the degree-scale polarization of the CMB to date. Similarly, this work describes analysis methods developed for the Bicep1 three-year data analysis, which includes the full data set acquired by Bicep1. This analysis has produced the tightest constraint on the B-mode polarization of the CMB to date, corresponding to a tensor-to-scalar ratio estimate of r = 0.04±0.32, or a Bayesian 95% credible interval of r < 0.70. These analysis methods, in addition to producing this new constraint, are directly applicable to future analyses of Bicep2 data. Taken together, the experimental techniques and analysis methods described herein promise to open a new observational window into the inflationary epoch and the initial conditions of our universe.
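As a rough, hypothetical illustration of how a one-sided 95% credible interval on r can be derived from a point estimate and its uncertainty, the sketch below assumes a Gaussian likelihood with a flat prior truncated at the physical boundary r >= 0; the actual Bicep1 analysis uses the full (non-Gaussian) likelihood, so this toy calculation is not expected to reproduce the published bound exactly.

```python
from scipy.stats import norm

# Toy posterior: Gaussian likelihood (r_hat, sigma) with a flat prior on r >= 0.
r_hat, sigma = 0.04, 0.32
mass_below_zero = norm.cdf(0.0, loc=r_hat, scale=sigma)        # excluded unphysical region
target = mass_below_zero + 0.95 * (1.0 - mass_below_zero)       # 95% of the truncated posterior
r_95 = norm.ppf(target, loc=r_hat, scale=sigma)
print(f"toy 95% upper limit: r < {r_95:.2f}")
```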
Abstract:
Faults can slip either aseismically or through episodic seismic ruptures, but we still do not understand the factors that determine the partitioning between these two modes of slip. This challenge can now be addressed thanks to the dense geodetic and seismological networks that have been deployed in various areas of active tectonics. The data from such networks, as well as modern remote sensing techniques, allow the spatial and temporal variability of the slip mode to be documented and give some insight. This is the approach taken in this study, which is focused on the Longitudinal Valley Fault (LVF) in eastern Taiwan. This fault is particularly appropriate since its very fast slip rate (about 5 cm/yr) is accommodated by both seismic and aseismic slip. Deformation of anthropogenic features shows that aseismic creep accounts for a significant fraction of fault slip near the surface, but the fault also releases energy seismically, having produced five M_w > 6.8 earthquakes in 1951 and 2003. Moreover, owing to the thrust component of slip, the fault zone is exhumed, which allows investigation of deformation mechanisms. In order to put constraints on the factors that control the mode of slip, we apply a multidisciplinary approach that combines modeling of geodetic observations, structural analysis and numerical simulation of the "seismic cycle". Analyzing a dense set of geodetic and seismological data across the Longitudinal Valley, including campaign-mode GPS, continuous GPS (cGPS), leveling, accelerometric, and InSAR data, we document the partitioning between seismic and aseismic slip on the fault. For the time period 1992 to 2011, we find that about 80-90% of slip on the LVF in the 0-26 km seismogenic depth range is actually aseismic. The clay-rich Lichi Mélange is identified as the key factor promoting creep at shallow depth. Microstructural investigations show that deformation within the fault zone must have resulted from a combination of frictional sliding at grain boundaries, cataclasis and pressure solution creep. Numerical modeling of earthquake sequences has been performed to investigate the possibility of reproducing the results of the kinematic inversion of geodetic and seismological data on the LVF. We first investigate the different modeling strategies developed to explore the role and relative importance of various factors in the manner in which slip accumulates on faults. We compare the results of quasi-dynamic simulations and fully dynamic ones, and we conclude that ignoring the transient wave-mediated stress transfers would be inappropriate. We therefore carry out fully dynamic simulations and succeed in qualitatively reproducing the wide range of observations for the southern segment of the LVF. We conclude that the spatio-temporal evolution of fault slip on the Longitudinal Valley Fault over 1997-2011 is consistent to first order with predictions from a simple model in which a velocity-weakening patch is embedded in a velocity-strengthening area.
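The abstract does not state the friction law used; earthquake-sequence simulations of this kind commonly rely on rate-and-state friction, under which the velocity-weakening versus velocity-strengthening behaviour invoked above is set by the sign of (a - b):
\[
\tau = \sigma\left[\mu_0 + a\,\ln\frac{V}{V_0} + b\,\ln\frac{V_0\,\theta}{D_c}\right],
\]
so that the steady-state frictional strength varies with slip rate as \(d\mu_{ss}/d\ln V = a-b\): patches with \(a-b<0\) are velocity-weakening (capable of nucleating seismic rupture), while regions with \(a-b>0\) are velocity-strengthening and tend to creep aseismically.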