967 results for Optimal frame-level timing estimator
Abstract:
The macroeconomic developments of the last decade have confirmed that reducing, or even merely stabilizing, public debt is one of the most important economic policy dilemmas facing even advanced industrial countries. For the euro area member states this criterion appears the least attainable, while the United States and Japan are among the global powers coping with seemingly insurmountable public debt. This paper provides a brief overview of several influential economic approaches (Barro [1979], Lucas and Stokey [1983], Marcet and Scott [2007], Martin [2009], etc.) that explain the factors behind the long-run evolution of the public debt level and the economic policy measures accompanying debt management. Drawing on these theories, the paper also formulates lessons for Hungarian public debt management in light of developments between 1990 and 2010.
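To make the debt-level mechanics these theories analyze concrete, the sketch below iterates the standard government budget identity b_{t+1} = b_t(1+r)/(1+g) - s, which links the debt-to-GDP ratio b to the real interest rate r, real growth g, and the primary surplus s. The identity is textbook-standard, but the parameter values are hypothetical and not taken from the paper.

```python
# Illustrative sketch of the standard debt-dynamics identity:
#   b_{t+1} = b_t * (1 + r) / (1 + g) - s
# where b is the debt-to-GDP ratio, r the real interest rate,
# g real GDP growth, and s the primary surplus (share of GDP).
# All parameter values below are hypothetical.

def debt_path(b0, r, g, s, years):
    """Project the debt-to-GDP ratio forward under constant r, g, s."""
    path = [b0]
    for _ in range(years):
        path.append(path[-1] * (1 + r) / (1 + g) - s)
    return path

if __name__ == "__main__":
    # With r > g and no primary surplus, the ratio drifts upward.
    for b in debt_path(b0=0.80, r=0.04, g=0.02, s=0.0, years=5):
        print(f"{b:.3f}")
```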
Abstract:
Using the toolkit of microeconomics and 2013 data from the Hungarian car market, the study presents a new method for price determination. The central question of the research is where to find the point at which the consumer is satisfied with the quality and price offered – preferably delivered at the right time – and the company is satisfied with the profit achieved. In this approach to price setting, quality and time therefore play a central role as value-creating functions. One of the main conclusions of the analysis is that the optimal price, derived from the profit maximum, can be determined for various parameters of quality and time. The method gives companies a new, economics-based perspective for setting their operating parameters and, with them, their competitive priorities (price, cost, quality level, time).
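To make the profit-maximum derivation concrete, here is a minimal sketch under an assumed linear demand specification (the functional form and all coefficients are hypothetical, not taken from the study): with demand q(p) = a - b·p + c·quality - d·time and unit cost k, profit π(p) = (p - k)·q(p) is maximized where dπ/dp = 0, giving p* = (a + b·k + c·quality - d·time) / (2b).

```python
# Minimal sketch: optimal price from profit maximization under an
# assumed linear demand with quality and time as demand shifters.
# Demand:  q(p) = a - b*p + c*quality - d*time   (coefficients hypothetical)
# Profit:  pi(p) = (p - k) * q(p)
# Setting d(pi)/dp = 0 gives  p* = (a + b*k + c*quality - d*time) / (2*b).

def optimal_price(a, b, c, d, k, quality, time_):
    """Closed-form profit-maximizing price for the linear demand above."""
    return (a + b * k + c * quality - d * time_) / (2 * b)

if __name__ == "__main__":
    p_star = optimal_price(a=100.0, b=2.0, c=5.0, d=3.0,
                           k=10.0, quality=4.0, time_=2.0)
    print(f"optimal price: {p_star:.2f}")
```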
Abstract:
Hazardous radioactive liquid waste is the legacy of more than 50 years of plutonium production associated with the United States' nuclear weapons program. It is estimated that more than 245,000 tons of nitrate wastes are stored at facilities such as the single-shell tanks (SST) at the Hanford Site in the state of Washington and the Melton Valley storage tanks at Oak Ridge National Laboratory (ORNL) in Tennessee. In order to develop an innovative new technology for the destruction and immobilization of nitrate-based radioactive liquid waste, the United States Department of Energy (DOE) initiated the research project that resulted in the technology known as the Nitrate to Ammonia and Ceramic (NAC) process. Because the nitrate anion is highly mobile and difficult to immobilize, especially in the relatively porous cement-based grout that has been used to date for the immobilization of liquid waste, it presents a major obstacle to environmental clean-up initiatives. Thus, in an effort to contribute to the existing body of knowledge and enhance the efficacy of the NAC process, this research involved the experimental measurement of the rheological and heat transfer behaviors of the NAC product slurry and the determination of the optimal operating parameters for the continuous NAC chemical reaction process. Test results indicate that the NAC product slurry exhibits typical non-Newtonian flow behavior. Correlation equations for the slurry's rheological properties and heat transfer rate in pipe flow have been developed; these should prove valuable in the design of a full-scale NAC processing plant. The 20-percent slurry exhibited typical dilatant (shear-thickening) behavior and was in the turbulent flow regime due to its lower viscosity. The 40-percent slurry exhibited typical pseudoplastic (shear-thinning) behavior and remained in the laminar flow regime throughout its experimental range. The reactions were found to be more efficient in the lower temperature range investigated. With respect to leachability, the experimental final NAC ceramic waste form is comparable to the final product of vitrification, the technology chosen by DOE to treat these wastes. As the NAC process has the potential to reduce the volume of nitrate-based radioactive liquid waste by as much as 70 percent, it not only promises to enhance environmental remediation efforts but also to effect substantial cost savings.
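As a concrete illustration of the non-Newtonian characterization described above, a power-law (Ostwald–de Waele) model τ = K·γ̇ⁿ is the standard way to distinguish dilatant (n > 1) from pseudoplastic (n < 1) behavior. The fitting sketch below uses synthetic data and hypothetical parameter values, not the study's measurements.

```python
# Sketch: fitting the Ostwald-de Waele power-law model  tau = K * gamma_dot**n
# to shear stress vs. shear rate data. n > 1 indicates dilatant
# (shear-thickening) behavior; n < 1 indicates pseudoplastic
# (shear-thinning). The data below are synthetic, for illustration only.
import numpy as np
from scipy.optimize import curve_fit

def power_law(gamma_dot, K, n):
    return K * gamma_dot ** n

# Synthetic shear-thinning data (true K = 2.0, n = 0.6) with mild noise.
rng = np.random.default_rng(0)
gamma_dot = np.linspace(1.0, 100.0, 25)
tau = 2.0 * gamma_dot ** 0.6 * (1 + 0.02 * rng.standard_normal(25))

(K_fit, n_fit), _ = curve_fit(power_law, gamma_dot, tau, p0=(1.0, 1.0))
behavior = "pseudoplastic" if n_fit < 1 else "dilatant"
print(f"K = {K_fit:.3f}, n = {n_fit:.3f}  ->  {behavior}")
```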
Abstract:
This dissertation focused on the longitudinal analysis of business start-ups using three waves of data from the Kauffman Firm Survey. The first essay used the data from years 2004-2008 and examined the simultaneous relationship between a firm's capital structure, its human resource policies, and their impact on the level of innovation. Firm leverage was calculated as debt divided by total financial resources. An index of employee well-being was constructed from a set of nine dichotomous questions asked in the survey. A negative binomial fixed effects model was used to analyze the effect of employee well-being and leverage on the count of patents and copyrights, which served as a proxy for innovation. The essay demonstrated that employee well-being positively affects a firm's innovation, while a higher leverage ratio has a negative impact on innovation. No significant relation was found between leverage and employee well-being. The second essay used the data from years 2004-2009 and asked whether a higher entrepreneurial speed of learning is desirable, and whether there is a linkage between the speed of learning and the growth rate of the firm. The change in the speed of learning was measured using a pooled OLS estimator in repeated cross-sections. There was evidence of a declining speed of learning over time, and it was concluded that a higher speed of learning is not necessarily a good thing, because the speed of learning is contingent on the entrepreneur's initial knowledge and the precision of the signals he receives from the market. Also, there was no reason to expect the speed of learning to be related to the growth of the firm in one direction over another. The third essay used the data from years 2004-2010 and determined the timing of diversification activities by business start-ups. It captured when a start-up diversified for the first time and explored the association between an early diversification strategy adopted by a firm and its survival rate. A semi-parametric Cox proportional hazards model was used to examine the survival pattern. The results demonstrated that firms diversifying at an early stage in their lives show a higher survival rate; however, this effect fades over time.
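For readers who want to see the shape of the third essay's survival analysis, here is a minimal sketch of fitting a semi-parametric Cox proportional hazards model in Python with the lifelines package. The data frame, column names, and covariate are hypothetical stand-ins, not the Kauffman Firm Survey variables.

```python
# Minimal sketch of a Cox proportional hazards fit, in the spirit of the
# third essay's analysis relating early diversification to start-up
# survival. All data and column names here are hypothetical placeholders.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "years_survived":    [1.0, 3.5, 2.0, 6.0, 4.5, 0.5, 5.0, 2.5],
    "failed":            [1,   0,   1,   0,   0,   1,   0,   1],  # 1 = firm exit
    "early_diversifier": [1,   1,   0,   1,   0,   0,   1,   0],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years_survived", event_col="failed")
cph.print_summary()  # reports the hazard ratio for early_diversifier
```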
Abstract:
The optimization of the timing parameters of traffic signals provides for efficient operation of traffic along a signalized transportation system. Optimization tools built on macroscopic simulation models have been used to determine optimal timing plans, and these plans have in some cases been evaluated and fine-tuned using microscopic simulation tools. A number of studies show inconsistencies between the results of optimization tools based on macroscopic simulation and the results obtained from microscopic simulation, but no attempts have been made to determine the reason behind these inconsistencies. This research investigates whether adjusting the parameters of macroscopic simulation models to correspond to the calibrated microscopic simulation model parameters can reduce these inconsistencies. The adjusted parameters include platoon dispersion model parameters, saturation flow rates, and cruise speeds. The results from this work show that adjusting cruise speeds and saturation flow rates can have significant impacts on improving the optimization/macroscopic simulation results, as assessed by microscopic simulation models.
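One classical building block of such signal-timing optimization is Webster's formula for the delay-minimizing cycle length, C0 = (1.5L + 5) / (1 - Y), where L is the total lost time per cycle and Y the sum of critical flow ratios. The sketch below is a generic illustration of that formula, not the specific optimization tool evaluated in this research.

```python
# Illustration: Webster's classical minimum-delay cycle length,
#   C0 = (1.5 * L + 5) / (1 - Y)
# L = total lost time per cycle (s); Y = sum over critical movements of
# (demand flow / saturation flow). A generic formula, not the specific
# optimization tool assessed in the study.

def webster_cycle_length(lost_time_s, critical_flow_ratios):
    Y = sum(critical_flow_ratios)
    if Y >= 1.0:
        raise ValueError("intersection oversaturated: sum of flow ratios >= 1")
    return (1.5 * lost_time_s + 5.0) / (1.0 - Y)

# Example: 12 s lost time, two critical movements with flow ratios 0.35, 0.30.
print(f"{webster_cycle_length(12.0, [0.35, 0.30]):.1f} s")
```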
Abstract:
Calmette Bay, within Marguerite Bay on the western side of the Antarctic Peninsula, contains one of the most continuous flights of raised beaches described to date in Antarctica. Raised beaches extend to 40.8 m above sea level (masl) and are thought to reflect glacial isostatic adjustment due to the retreat of the Antarctic Peninsula Ice Sheet. Using optically stimulated luminescence (OSL), we dated quartz extracts from cobble surfaces buried in raised beaches at Calmette Bay. The beaches are separated into upper and lower beaches based on OSL ages, geomorphology, and sedimentary fabric, with the two sets divided by a prominent scarp. One of our OSL ages from the upper beaches dates to 9.3 thousand years ago (ka; as of 1950), consistent with previous extrapolation of sea-level data and the time of ice retreat from inner Marguerite Bay. However, four of the seven ages from the upper beaches date to the timing of glaciation. We interpret these ages to represent reworking of beaches deposited prior to the Last Glacial Maximum (LGM) by advancing and retreating LGM ice. Ages from the lower beaches record relative sea-level (RSL) fall due to Holocene glacial isostatic adjustment. We suggest a Holocene marine limit of 21.7 masl with an age of 5.5-7.3 ka based on OSL ages from Calmette Bay and other sea-level constraints in the area. A marine limit at 21.7 masl implies half as much relative sea-level change in Marguerite Bay during the Holocene as suggested by previous sea-level reconstructions. No evidence for a relative sea-level signature of neoglacial events, such as a decrease followed by an increase in RSL fall due to ice advance and retreat associated with the Little Ice Age, is found within Marguerite Bay, indicating either: (1) no significant neoglacial advances occurred within Marguerite Bay; (2) rheological heterogeneity allows part of the Antarctic Peninsula (i.e. the South Shetland Islands) to respond to rapid ice mass changes while other regions are incapable of responding to short-lived ice advances; or (3) the magnitude of neoglacial events within Marguerite Bay is too small to resolve through relative sea-level reconstructions. Although reconstructing sea-level histories from OSL-dated raised beach deposits provides a better understanding of the timing and nature of relative sea-level change in Marguerite Bay, we highlight possible problems associated with using raised beaches as sea-level indices due to post-depositional reworking by storm waves.
Abstract:
Supply chain operations directly affect service levels. Decisions on adding or removing facilities are generally made on the basis of overall cost, leaving out the efficiency of each unit. By decomposing the supply chain superstructure, efficiency analysis of the facilities (warehouses or distribution centers) that serve customers can easily be implemented. With the proposed algorithm, the selection of a facility is based on service-level maximization, not just cost minimization, because the analysis filters all feasible solutions using the Data Envelopment Analysis (DEA) technique. Through multiple iterations, solutions are filtered via DEA and only the efficient ones are selected, leading to cost minimization. In this work, the problem of optimal supply chain network design is addressed with a DEA-based algorithm; a Branch and Efficiency (B&E) algorithm is deployed for its solution. In this DEA approach, each solution (a potentially installed warehouse, plant, etc.) is treated as a Decision Making Unit and is thus characterized by inputs and outputs. Through additional constraints named "efficiency cuts", the algorithm selects only efficient solutions, providing better objective function values. The applicability of the proposed algorithm is demonstrated through illustrative examples.
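To illustrate the DEA building block the B&E algorithm relies on, the sketch below computes input-oriented CCR efficiency scores for candidate facilities by linear programming; a unit scoring 1.0 lies on the efficient frontier. The warehouse data and the use of scipy are illustrative assumptions, not the paper's implementation.

```python
# Sketch: input-oriented CCR DEA efficiency of one unit, solved as a
# linear program with scipy. Each candidate facility is a Decision
# Making Unit (DMU) with inputs X and outputs Y; a score of 1.0 means
# the unit lies on the efficient frontier. Data are hypothetical.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, k):
    """Efficiency of DMU k. X: (n_dmus, n_inputs), Y: (n_dmus, n_outputs)."""
    n = X.shape[0]
    # Decision variables: theta, lambda_1..lambda_n. Minimize theta.
    c = np.r_[1.0, np.zeros(n)]
    # Inputs:  sum_j lam_j * X[j, i] - theta * X[k, i] <= 0
    A_in = np.hstack([-X[[k]].T, X.T])
    # Outputs: -sum_j lam_j * Y[j, r] <= -Y[k, r]
    A_out = np.hstack([np.zeros((Y.shape[1], 1)), -Y.T])
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(X.shape[1]), -Y[k]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (n + 1))
    return res.fun

# Four hypothetical warehouses: inputs = (cost, staff), output = demand served.
X = np.array([[100.0, 8], [120.0, 10], [90.0, 12], [110.0, 7]])
Y = np.array([[500.0], [520.0], [480.0], [560.0]])
for k in range(4):
    print(f"warehouse {k}: efficiency = {ccr_efficiency(X, Y, k):.3f}")
```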
Abstract:
As we look around a scene, we perceive it as continuous and stable even though each saccadic eye movement changes the visual input to the retinas. How the brain achieves this perceptual stabilization is unknown, but a major hypothesis is that it relies on presaccadic remapping, a process in which neurons shift their visual sensitivity to a new location in the scene just before each saccade. This hypothesis is difficult to test in vivo because complete, selective inactivation of remapping is currently intractable. We tested it in silico with a hierarchical, sheet-based neural network model of the visual and oculomotor system. The model generated saccadic commands to move a video camera abruptly. Visual input from the camera and internal copies of the saccadic movement commands, or corollary discharge, converged at a map-level simulation of the frontal eye field (FEF), a primate brain area known to receive such inputs. FEF output was combined with eye position signals to yield a suitable coordinate frame for guiding arm movements of a robot. Our operational definition of perceptual stability was "useful stability," quantified as continuously accurate pointing to a visual object despite camera saccades. During training, the emergence of useful stability was correlated tightly with the emergence of presaccadic remapping in the FEF. Remapping depended on corollary discharge but its timing was synchronized to the updating of eye position. When coupled to predictive eye position signals, remapping served to stabilize the target representation for continuously accurate pointing. Graded inactivations of pathways in the model replicated, and helped to interpret, previous in vivo experiments. The results support the hypothesis that visual stability requires presaccadic remapping, provide explanations for the function and timing of remapping, and offer testable hypotheses for in vivo studies. We conclude that remapping allows for seamless coordinate frame transformations and quick actions despite visual afferent lags. With visual remapping in place for behavior, it may be exploited for perceptual continuity.
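A toy illustration of the coordinate logic in this account: a target's retinal position plus the current eye position gives its head-centered location, and presaccadic remapping amounts to shifting the retinal representation by the corollary-discharge copy of the upcoming saccade vector. The sketch below is a schematic of that arithmetic only, not the sheet-based network model itself.

```python
# Toy sketch of the coordinate arithmetic behind presaccadic remapping.
# A schematic of the transformations only, not the paper's sheet-based
# FEF network model.
import numpy as np

def head_centered(retinal_pos, eye_pos):
    """Target location in head-centered coordinates."""
    return retinal_pos + eye_pos

def remap(retinal_pos, saccade_vector):
    """Predicted post-saccade retinal position from corollary discharge:
    the representation shifts opposite to the eye displacement."""
    return retinal_pos - saccade_vector

target_retinal = np.array([5.0, 2.0])   # degrees, before the saccade
eye = np.array([0.0, 0.0])
saccade = np.array([10.0, 0.0])         # upcoming saccade command

before = head_centered(target_retinal, eye)
after = head_centered(remap(target_retinal, saccade), eye + saccade)
print(before, after)  # identical: the head-centered estimate stays stable
```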
Abstract:
The cortisol awakening response (CAR) is typically measured in the domestic setting. Moderate sample timing inaccuracy has been shown to result in erroneous CAR estimates, and such inaccuracy has been shown to partially explain inconsistency in the CAR literature. The need for more reliable measurement of the CAR has recently been highlighted in expert consensus guidelines, which pointed out that fewer than 6% of published studies provided electronic monitoring of saliva sampling times in the post-awakening period. Analyses of a merged data-set of published studies from our laboratory are presented. To qualify for selection, both the time of awakening and collection of the first sample must have been verified by electronic monitoring, and sampling must have commenced within 15 min of awakening. Participants (n = 128) were young (median age of 20 years) and healthy. Cortisol values were determined in the 45 min post-awakening period on 215 sampling days. On 127 days, the delay between verified awakening and collection of the first sample was less than 3 min ('no delay' group); on 45 days there was a delay of 4–6 min ('short delay' group); on 43 days the delay was 7–15 min ('moderate delay' group). Cortisol values for verified sampling times accurately mapped onto the typical post-awakening cortisol growth curve, regardless of whether sampling deviated from the desired protocol timings. This supports incorporating rather than excluding delayed data (up to 15 min) in CAR analyses. For this population the fitted cortisol growth curve equation predicted a mean cortisol awakening level of 6 nmol/l (±1 for 95% CI) and a mean CAR rise of 6 nmol/l (±2 for 95% CI). We also modelled the relationship between real delay and CAR magnitude when the CAR is calculated erroneously by incorrectly assuming adherence to protocol time. Findings supported a curvilinear hypothesis for the effect of sampling delay on the CAR: short delays of 4–6 min between awakening and commencement of saliva sampling resulted in an overestimated CAR, while moderate delays of 7–15 min were associated with an underestimated CAR. The findings emphasize the need to employ electronic monitoring of sampling accuracy when measuring the CAR in the domestic setting.
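As a schematic of the delay analysis described above, the sketch below fits a quadratic to (delay, measured CAR) pairs and locates the turning point. The data are synthetic and the quadratic form is an assumed stand-in for the paper's curvilinear model.

```python
# Sketch of testing a curvilinear (quadratic) relation between sampling
# delay and the measured CAR. The data points are synthetic, for
# illustration only.
import numpy as np

delay_min = np.array([0, 2, 4, 5, 6, 8, 10, 12, 15], dtype=float)
car_nmol_l = np.array([6.0, 6.5, 7.2, 7.5, 7.1, 6.0, 5.0, 4.2, 3.5])

# Quadratic fit: CAR ~ b2*delay^2 + b1*delay + b0. A negative b2 with an
# interior maximum matches "short delays overestimate, moderate delays
# underestimate" relative to the true CAR at zero delay.
b2, b1, b0 = np.polyfit(delay_min, car_nmol_l, deg=2)
peak_delay = -b1 / (2 * b2)
print(f"b2 = {b2:.3f}, apparent CAR peaks at {peak_delay:.1f} min delay")
```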
Abstract:
A sufficiently complex set of molecules, if subject to perturbation, will self-organise and show emergent behaviour. If such a system can take on information, it will become subject to natural selection. This could explain how self-replicating molecules evolved into life and how intelligence arose. A pivotal step in this evolutionary process was of course the emergence of the eukaryote and the advent of the mitochondrion, which both enhanced energy production per cell and increased the ability to process, store and utilise information. Recent research suggests that from its inception life embraced quantum effects such as "tunnelling" and "coherence", while competition and stressful conditions provided a constant driver for natural selection. We believe that the biphasic adaptive response to stress described by hormesis – a process that captures information to enable adaptability – is central to this whole process. Critically, hormesis could improve mitochondrial quantum efficiency, improving the ATP/ROS ratio, while inflammation, which is tightly associated with the ageing process, might do the opposite. This all suggests that to achieve optimal health and healthy ageing, one has to stress the system sufficiently to ensure peak mitochondrial function, which itself could reflect selection of optimum efficiency at the quantum level.
Abstract:
Optimal assistance from an adult, adapted to the student's current level of understanding (scaffolding), can help students with emotional and behavioural difficulties (EBD) demonstrate a level of understanding on scientific tasks similar to that of students in regular education (Van Der Steen, Steenbeek, Wielinski & Van Geert, 2012). In the present study the optimal scaffolding techniques for EBD students were investigated, as well as how these differ from the scaffolding techniques used for regular students. A researcher visited five EBD students and five regular students (aged three to six years old) three times over a 1.5-year period. Student and researcher worked together on scientific tasks about gravity and air pressure, while the researcher asked questions. An adaptive protocol was used, so that all children were asked the same basic questions about the mechanisms of the task. Besides this, the researcher was also allowed to ask follow-up questions and use scaffolding methods when these seemed necessary. We found a greater amount of scaffolding in the group of EBD students compared to the regular students. The scaffolding techniques that were used also differed between the two groups. For EBD students, we saw more scaffolding strategies focused on keeping the student committed to the task, and fewer strategies aimed at the relationship between the child and the researcher. Furthermore, in the group of regular students we saw a decreasing trend in the amount of scaffolding over the course of the three visits. This trend was not visible for the EBD students. These results highlight the importance of using different scaffolding strategies when working with EBD students compared to regular students. Future research can give a clearer image of the differences in scaffolding needs between these two groups.
Abstract:
We consider a cooperative relaying network in which a source communicates with a group of users in the presence of an eavesdropper. We assume that there are no direct source-user links, so the users receive only the signal retransmitted by the relay, whereas the eavesdropper receives both the original and the retransmitted signals. Under these assumptions, we exploit a user selection technique to enhance secrecy performance. We first find the optimal power allocation strategy when the source has full channel state information (CSI) for all links. We then evaluate the security level through: i) the ergodic secrecy rate and ii) the secrecy outage probability when only statistical knowledge of the CSI is available.
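To make the two reported metrics concrete, here is a minimal Monte Carlo sketch of the ergodic secrecy rate, E{[log2(1+SNR_d) - log2(1+SNR_e)]+}, and the secrecy outage probability, P(C_s < R_s), under assumed Rayleigh fading. The channel statistics are hypothetical; this is a generic illustration of the metrics, not the paper's relaying model.

```python
# Sketch: Monte Carlo estimates of the ergodic secrecy rate and the
# secrecy outage probability for one destination/eavesdropper pair over
# Rayleigh fading (exponentially distributed SNRs). Channel parameters
# are hypothetical, for illustration only.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
snr_d = 10.0 * rng.exponential(size=n)   # destination SNR, mean 10
snr_e = 2.0 * rng.exponential(size=n)    # eavesdropper SNR, mean 2

# Instantaneous secrecy rate: [log2(1 + SNR_d) - log2(1 + SNR_e)]^+
cs = np.maximum(np.log2(1 + snr_d) - np.log2(1 + snr_e), 0.0)

R_s = 1.0                                # target secrecy rate (bits/s/Hz)
print(f"ergodic secrecy rate : {cs.mean():.3f} bits/s/Hz")
print(f"secrecy outage prob  : {(cs < R_s).mean():.3f}")
```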
Abstract:
Some research has investigated low- and higher-level visual processing in neurotypical individuals and in individuals with autism spectrum disorder (ASD). However, the developmental interaction between these levels of visual processing is still not well understood. This thesis therefore has two main objectives. The first (Study 1) is to assess the developmental interaction between low- and intermediate-level visual analysis across different developmental periods (school age, adolescence, and adulthood). The second (Study 2) is to assess the functional relationship between low- and intermediate-level visual processing in adolescents and adults with ASD. Both objectives were addressed using the same stimuli and procedures. Specifically, sensitivity to complex circular shapes (radial frequency patterns, RFPs), defined by luminance or by texture, was measured with a two-alternative forced-choice procedure. The results of the first study showed that the local information of the RFPs underlying intermediate-level visual processes affects sensitivity differently across distinct developmental periods. Specifically, when the contour is defined by luminance, children's performance is weaker than that of adolescents and adults for RFPs requiring global perception. When the RFPs are defined by texture, children's sensitivity is weaker than that of adolescents and adults for both local and global conditions. Consequently, the type of local information defining the local elements of the global shape influences the period at which visual sensitivity reaches a developmental level similar to that identified in adults. Weak visual integration between low- and intermediate-level mechanisms may explain children's reduced RFP sensitivity; this can be attributed to immature feedback and horizontal connections as well as to the underdevelopment of certain cortical areas of the visual system. The results of the second study showed that visual sensitivity in autism is influenced by the manipulation of local information. Specifically, with luminance-defined stimuli, sensitivity in individuals with ASD is affected only in conditions requiring local processing. With texture-defined stimuli, however, sensitivity is reduced for both global and local visual processing. These results suggest that shape perception in autism is related to the efficiency with which local elements (luminance versus texture) are processed. Lateral and feedforward/feedback connections in early visual areas may be subject to an imbalance between excitatory and inhibitory signals, thereby influencing the efficiency with which luminance- and texture-defined visual information is processed in autism. These results support the hypothesis that alterations in low-level (local) visual perception underlie higher-level atypicalities in individuals with ASD.
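For context, radial frequency patterns are typically defined by sinusoidally modulating the radius of a circle, r(θ) = r0(1 + A·sin(ωθ + φ)). The sketch below generates such a contour under hypothetical parameter values; it is not the thesis's actual stimulus code.

```python
# Sketch: generating a radial frequency (RF) pattern contour by
# sinusoidal modulation of a circle's radius:
#   r(theta) = r0 * (1 + A * sin(w * theta + phi))
# Parameter values are hypothetical, for illustration only.
import numpy as np

def rf_contour(r0=1.0, amplitude=0.1, frequency=5, phase=0.0, n=512):
    theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
    r = r0 * (1 + amplitude * np.sin(frequency * theta + phase))
    return r * np.cos(theta), r * np.sin(theta)   # x, y contour coordinates

x, y = rf_contour()
print(x[:3], y[:3])
```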
Abstract:
This thesis consists of three articles on optimal fiscal and monetary policy. In the first article, I study the joint determination of optimal fiscal and monetary policy in a New Keynesian framework with frictional labor markets, money, and distortionary labor income taxes. I find that when workers' bargaining power is low, the Ramsey-optimal policy calls for a significantly higher optimal annual inflation rate, above 9.5%, which is also highly volatile, above 7.4%. The Ramsey government uses inflation to induce efficient fluctuations in labor markets, despite the fact that price changes are costly and despite the presence of time-varying labor taxation. The quantitative results clearly show that the planner relies more heavily on inflation, not taxes, to smooth distortions in the economy over the business cycle. Indeed, there is a quite clear trade-off between the optimal inflation rate and its volatility on the one hand, and the optimal income tax rate and its variability on the other. The lower the degree of price rigidity, the higher the optimal inflation rate and inflation volatility, and the lower the optimal income tax rate and income tax volatility. For a degree of price rigidity ten times smaller, the optimal inflation rate and its volatility rise remarkably, by more than 58% and 10%, respectively, while the optimal income tax rate and its volatility decline dramatically. These results matter because in frictional labor market models without fiscal policy and money, or in New Keynesian frameworks even with a rich array of real and nominal rigidities and a tiny degree of price rigidity, price stability appears to be the central objective of optimal monetary policy. In the absence of fiscal policy and money demand, the optimal inflation rate falls very close to zero, with roughly 97 percent less volatility, consistent with the literature. In the second article, I show that the quantitative results imply a negative relationship between workers' bargaining power and the welfare costs of monetary rules: the lower the workers' bargaining power, the larger the welfare costs of monetary policy rules. However, in striking contrast to the literature, rules that respond to output or to labor market tightness entail considerably lower welfare costs than the inflation-targeting rule, particularly the rule that responds to labor market tightness. Welfare costs also fall remarkably as the size of the output coefficient in the monetary rules increases. My results indicate that raising workers' bargaining power to the Hosios level or above significantly reduces the welfare costs of all three monetary rules, and responding to output or labor market tightness then no longer yields lower welfare costs than the inflation-targeting rule, in line with the existing literature.
In the third article, I first show that the Friedman rule is not optimal in a monetary model with a cash-in-advance constraint on firms when the government finances its spending with distortionary consumption taxes. I then argue that the Friedman rule is optimal in the presence of these distortionary taxes if we assume a model with raw and effective labor, in which only raw labor is subject to the cash-in-advance constraint and the utility function is homothetic in the two types of labor and separable in consumption. When the production function exhibits constant returns to scale, the Friedman rule is optimal even when the wage rates differ, in contrast to the cash-credit goods model, in which the prices of the two goods are the same. If the production function exhibits increasing or decreasing returns to scale, the wage rates must be equal for the Friedman rule to be optimal.
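For reference, the Friedman rule discussed in the third article is the prescription of a zero net nominal interest rate, so that the opportunity cost of holding money vanishes. A minimal statement, assuming the standard Fisher relation, is:

```latex
% The Friedman rule sets the net nominal interest rate to zero, i_t = 0.
% Combined with the Fisher relation 1 + i_t = (1 + r_t)(1 + \pi_{t+1}),
% it implies deflation at (approximately) the real interest rate.
\[
  i_t = 0
  \quad\Longrightarrow\quad
  1 = (1 + r_t)(1 + \pi_{t+1})
  \quad\Longrightarrow\quad
  \pi_{t+1} = \frac{1}{1 + r_t} - 1 \approx -r_t .
\]
```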