921 results for inter-bank interest rate
Abstract:
This study empirically analyzes the sources of exchange rate fluctuations in India by employing a structural VAR model. The VAR system consists of three variables: the nominal exchange rate, the real exchange rate, and the relative output of India and a foreign country. Consistent with most previous studies, the empirical evidence demonstrates that real shocks are the main drivers of fluctuations in real and nominal exchange rates, indicating that the central bank cannot maintain the real exchange rate at its desired level over time.
Abstract:
In the five-year period from 2006 to 2011, the real exchange rate of the Myanmar kyat appreciated by 200 percent, meaning that the value of the US dollar in Myanmar fell to one third of its previous level. While a resource boom is suspected as a source of the real exchange rate appreciation, its aggravation is related to administrative controls on foreign exchange and imports. First, foreign exchange controls prevented negotiated foreign exchange transactions from being replaced by bank intermediation, which hampered government intervention in the market. Second, import controls repressed imports, aggravating the excess supply of foreign exchange. Relaxing administrative controls is necessary to moderate the currency appreciation.
Abstract:
In the post-Asian-crisis period, bank loans to the manufacturing sector have shown a slow recovery in the affected countries, and the Philippines is no exception. This paper provides a literature survey on the effectiveness of the central bank's monetary policy and the responsiveness of the financial market, and discusses the future work needed to better understand monetary policy effectiveness in the Philippines. As the survey shows, most previous work focuses on the correlation with short-term policy rates during periods of monetary tightening, with relatively little interest in quantitative effectiveness. Future work would shed light on (1) the asset side of banks, other than loans outstanding, to analyze their behavior and preferences in structuring portfolios, and (2) the quantitative impacts during periods of monetary easing.
Abstract:
By analyzing a comprehensive dataset on transport transactions in Japan, we describe a directional imbalance in freight rates by transport mode and examine its potential sources, such as economies of density and directionally imbalanced transport flows. A certain number of observed links show asymmetric transport costs. Instrumental variable analysis shows that economies of density account for the deviation from symmetric freight rates between prefectures. Our results show that a 10% increase in outbound transport flow relative to inbound transport flow leads to a 2.1% decrease in the outbound freight rate relative to the inbound freight rate.
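As a back-of-the-envelope check of the reported elasticity: a log-log coefficient of about -0.21 does imply roughly a 2.1% relative fall in the freight rate for a 10% relative rise in flow, with the exact constant-elasticity effect slightly smaller than the linear approximation. The numbers below are only the abstract's headline values, not the paper's data:

```python
elasticity = -0.21          # coefficient from the log-log IV regression (headline value)
flow_increase = 0.10        # 10% rise in outbound relative to inbound flow

# Linear approximation: %change in rate ~ elasticity * %change in flow
approx = elasticity * flow_increase

# Exact effect implied by a constant elasticity
exact = (1 + flow_increase) ** elasticity - 1

print(f"approx: {approx:.1%}, exact: {exact:.1%}")
```

The small gap between the two figures is why elasticities quoted from log-log regressions are usually read as approximate percentage effects.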
Abstract:
Since the abolition of the official peg and the introduction of a managed float in April 2012, the Central Bank of Myanmar has operated daily two-way auctions of foreign exchange aimed at smoothing exchange rate fluctuations. Despite the reforms to the foreign exchange regime, however, informal trading of foreign exchange remains pervasive. Using daily informal exchange rate and Central Bank auction data, this study examines the impact of the auctions on the informal market rate. First, a VAR analysis indicates that the official rate did not Granger-cause the informal rate. Second, GARCH models indicate that the auctions did not reduce the conditional variance of informal rate returns. Overall, the auctions have had only a quite modest impact on the informal exchange rate.
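The volatility result rests on the GARCH(1,1) conditional-variance recursion, sigma2_t = omega + alpha*e_{t-1}^2 + beta*sigma2_{t-1}; a variance-reducing auction effect would show up as a negative dummy term in this equation. A minimal sketch of the recursion itself, with illustrative parameters rather than the study's estimates:

```python
# GARCH(1,1) conditional-variance filter.
# Parameters are illustrative, not estimates from the study.

def garch_variance(residuals, omega=0.05, alpha=0.1, beta=0.85):
    """Filter a residual series into its conditional-variance path."""
    sigma2 = [omega / (1 - alpha - beta)]   # start at the unconditional variance
    for e in residuals[:-1]:
        sigma2.append(omega + alpha * e**2 + beta * sigma2[-1])
    return sigma2

# A single large shock raises next-period conditional variance, which then decays.
path = garch_variance([0.0, 2.0, 0.0, 0.0])
print([round(s, 3) for s in path])
```

The persistence alpha + beta (here 0.95) governs how slowly volatility shocks die out, which is why daily exchange-rate returns are a natural fit for this model class.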
Abstract:
Studies on the rise of global value chains (GVCs) have attracted a great deal of interest in the recent economics literature. However, due to statistical and methodological challenges, most existing research ignores domestic regional heterogeneity in assessing the impact of joining GVCs. GVCs are supported not only directly by domestic regions that export goods and services to the world market, but also indirectly by other domestic regions that provide parts, components, and intermediate services to final exporting regions. To better understand the nature of a country's position and degree of participation in GVCs, we need to fully examine the role of individual domestic regions. Understanding the domestic components of GVCs is especially important for larger economies such as China, the US, India and Japan, where there may be large variations in economic scale, geography of manufacturing, and development stages at the domestic regional level. This paper proposes a new framework for measuring domestic linkages to global value chains. This framework measures domestic linkages by endogenously embedding a target country's (e.g. China and Japan) domestic interregional input–output tables into the OECD inter-country input–output model. Using this framework, we can more clearly understand how global production is fragmented and extended internationally and domestically.
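The accounting core of such inter-country input-output models is the Leontief inverse, x = (I - A)^-1 f, which traces how final demand in one region pulls output from all others, including regions with no direct exports. A toy two-region version, with invented coefficients, can be sketched as:

```python
# Toy 2-region Leontief model: gross output needed to meet final demand,
# including indirect inter-regional requirements. Coefficients are invented.

def leontief_output(A, f):
    """Solve x = (I - A)^-1 f for a 2x2 technical-coefficient matrix A."""
    (a, b), (c, d) = A
    # Entries of (I - A) and its closed-form 2x2 inverse
    p, q, r, s = 1 - a, -b, -c, 1 - d
    det = p * s - q * r
    inv = [[s / det, -q / det], [-r / det, p / det]]
    return [inv[0][0] * f[0] + inv[0][1] * f[1],
            inv[1][0] * f[0] + inv[1][1] * f[1]]

# Region 1 exports; region 2 only supplies intermediates to region 1.
A = [[0.2, 0.0],
     [0.3, 0.1]]
f = [100.0, 0.0]   # final (export) demand falls on region 1 only
x = leontief_output(A, f)
print([round(v, 1) for v in x])
```

Even though region 2 faces zero final demand, it produces positive output through its intermediate deliveries, which is exactly the indirect GVC participation of domestic regions the paper sets out to measure.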
Abstract:
In this genre analysis research paper, we compare U.S. patents, contracts, and regulations on technical matters, with a focus on the relation between vagueness and the communicative purposes and subpurposes of these three genres. Our main interest is the investigation of intergeneric conventions across the three genres, based on software analysis of three corpora (one per genre, 1 million words per corpus). The investigation finds that intergeneric conventions exist at the level of the types of linguistic vagueness expressed, but that intergeneric conventions at the level of actual formulations are rare. The conclusion is that, at this latter level, the influence of the situation type underlying the individual genre matters more than the overarching legal character of the genres when it comes to introducing explicit vagueness into the text.
Abstract:
Nowadays, society shows a growing interest in and concern about health care. This can be seen in two facts: first, the increasing number of people practising some kind of healthy activity (sports, balanced diet, etc.); secondly, the growing number of commercial wearable smart devices (smartwatches, phones or bands) able to measure physiological parameters such as heart rate, breathing rate, distance covered or calories consumed. Combining both factors, a large number of applications are appearing that not only monitor the current health status of the user, but also recommend changes of habit that lead to an improved physical condition. In this context, wearable devices merged with the Internet of Things (IoT) paradigm enable new market segments for wearable-based health applications, which go beyond fitness by providing solutions for the care of patients, the monitoring of children or the elderly, defence and security, the monitoring of at-risk personnel (such as firefighters or police officers), and a long list of applications yet to come. The IoT paradigm can be built on existing Wireless Sensor Networks (WSNs) by connecting wearable devices to them, easing the migration of new users and actors to IoT applications. However, a major intrinsic issue of these networks is heterogeneity: there is a large number of operating systems, hardware platforms, communication and application protocols, development platforms, proprietary solutions and programming languages, each with unique features.

The main objective of this thesis is to make significant contributions towards solving not only the heterogeneity problem, but also that of providing sufficient security mechanisms to safeguard the integrity of the data exchanged in this kind of application, something of great importance since users' medical and biometric data are protected by national and EU law. To achieve these objectives, a comprehensive state-of-the-art study of the technologies related to the research framework was carried out (platforms and standards for WSNs and IoT, distributed implementation platforms, wearable devices, operating systems and programming languages). This study informed the design decisions behind the three main contributions of the thesis: a Wearable Device Service Bus (WDSB), based on existing technologies such as ESB, WWBAN, WSN and IoT, through which applications can access WSN services regardless of the platform and operating system on which they run; a Wearable Inter-Domain communication Protocol (WIDP), which integrates in a single solution lightweight protocols suitable for low-capacity devices such as wearables and WSN nodes (REST, JSON, AMQP, CoAP, etc.); and a security proposal for WSNs based on the application of trust domains.

Although the contributions are generic, they were validated in a concrete application scenario: a solution for monitoring physical parameters in sports environments, developed within the European research project "LifeWear". All the elements needed to validate the main contributions of this thesis were deployed in this scenario, and a mobile application was additionally developed by one of the project partners, providing an external validation of the solution. The wearable devices used included a smartwatch, an Android smartphone and a wireless heart-rate monitor able to obtain several physiological parameters from the athlete. Several validation tests were run on this scenario, with satisfactory results.
Abstract:
Acquired brain injury (ABI) constitutes a serious social and health problem of increasing magnitude and of great diagnostic and therapeutic complexity. Its high incidence, together with patients' increased survival once the acute phase is over, also makes it a highly prevalent problem. According to the World Health Organization (WHO), brain injury will be among the 10 most common causes of disability by 2020. Neurorehabilitation improves both cognitive and functional deficits and increases the autonomy of people with ABI. The incorporation of new technological solutions into the neurorehabilitation process aims at a new paradigm in which treatments can be designed to be intensive, personalized, monitored and evidence-based, since these four characteristics are what ensure that treatments are effective. Unlike most medical disciplines, there are no associations of symptoms and signs of cognitive impairment that guide therapy. Currently, neurorehabilitation treatments are designed on the basis of the results of a neuropsychological assessment battery that evaluates the level of impairment of each cognitive function (memory, attention, executive functions, etc.). The research line in which this work is framed aims to design and develop a cognitive profile based not only on the results of that test battery, but also on theoretical information covering both anatomical structures and functional relationships, and on anatomical information obtained from imaging studies such as magnetic resonance. In this way, the cognitive profile used to design the treatments integrates personalized, evidence-based information.

Neuroimaging techniques are a fundamental tool for identifying lesions in order to generate these cognitive profiles. The classical approach to lesion identification is to delineate brain anatomical regions manually. This approach suffers from inconsistencies of criteria between clinicians, poor reproducibility and the time it requires, so automating the procedure is essential to ensure objective extraction of information. Automatic delineation of anatomical regions is performed by registration, either against an atlas or against imaging studies of other subjects. However, the pathological changes associated with ABI always involve intensity abnormalities and/or changes in the location of structures. As a result, traditional intensity-based registration algorithms do not work correctly and require the clinician to select certain points (called singular points in this thesis). Nor do these algorithms allow large, delocalized deformations, which can also occur in the presence of lesions caused by a stroke or a traumatic brain injury (TBI).

This thesis focuses on the design, development and implementation of a methodology for the automatic detection of damaged structures, integrating algorithms whose main objective is to generate reproducible and objective results. The methodology is divided into four stages: pre-processing, singular point identification, registration and lesion detection. Pre-processing: the aim of this first stage is to standardize all input data so that valid conclusions can be drawn from the results; it therefore has a great impact on the final results, and comprises three operations: skull stripping, intensity normalization and spatial normalization. Singular point identification: this stage automates the identification of anatomical landmarks (singular points), the equivalent of the clinician's manual marking of anatomical points. Automation makes it possible to identify a larger number of points, and hence obtain more information; to eliminate the factor associated with inter-subject variability, making the results reproducible and objective; and to eliminate the time spent on manual marking. This work proposes a singular-point identification algorithm (descriptor) based on a multi-detector solution containing multi-parametric information, both spatial and intensity-based, which has been compared against similar algorithms found in the state of the art. Registration: this stage aims to bring two imaging studies from different subjects/patients into spatial correspondence. The algorithm proposed here is descriptor-based, and its main objective is to compute a vector field that allows delocalized deformations to be introduced in different regions of the image, as large as the associated deformation vector indicates. It has been compared against other registration algorithms used in neuroimaging applications with control subjects, and the results obtained are promising, representing a new context for the automatic identification of structures. Lesion identification: this final stage identifies those structures whose spatial location and area or volume have been modified with respect to a normal state. To this end, a statistical study of the chosen atlas is performed and the statistical parameters of normality associated with location and area are established. Depending on the structures delineated in the atlas, more or fewer anatomical structures can be identified; the methodology is independent of the selected atlas.

Overall, this thesis corroborates the research hypotheses put forward regarding the automatic identification of lesions using structural medical imaging, specifically magnetic resonance studies. On these foundations, new research fields can be opened that contribute to improving lesion detection.
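The final lesion-flagging step described above can be sketched minimally, under the assumption that normality is modelled per structure as an atlas-derived mean and standard deviation of area; the threshold and all numbers below are illustrative, not the thesis's actual parameters:

```python
# Flag anatomical structures whose measured area deviates from the
# atlas-derived normal range. Statistics and threshold are illustrative.

def flag_lesioned(measured, atlas_stats, z_thresh=1.96):
    """Return (structure, z-score) pairs whose |z| exceeds z_thresh."""
    flagged = []
    for name, area in measured.items():
        mean, std = atlas_stats[name]
        z = (area - mean) / std
        if abs(z) > z_thresh:
            flagged.append((name, round(z, 2)))
    return flagged

atlas_stats = {"hippocampus_L": (3500.0, 200.0),   # mm^2, hypothetical values
               "thalamus_L":    (6000.0, 300.0)}
measured =    {"hippocampus_L": 2900.0,
               "thalamus_L":    6100.0}

print(flag_lesioned(measured, atlas_stats))
```

The same pattern extends to location: a structure's centroid displacement can be z-scored against the atlas statistics in exactly the same way.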
Abstract:
The many technological applications of magnetic nanoparticles (MNPs) have intensified interest in materials with distinctive magnetic properties, such as enhanced saturation magnetization (MS) and superparamagnetic behavior. Although metallic Fe and Co and bimetallic FeCo and FePt MNPs have high MS values, their low chemical stability hinders applications at the nanometric scale. In this work, Fe, Co, FeCo and FePt MNPs with high chemical stability and strict morphological control were synthesized. Metal oxide MNPs (of Fe and Co) were also obtained. Two synthesis methods were employed. Using a method based on nanoheterogeneous systems (micellar or inverse-microemulsion systems), Fe3O4 and metallic Co MNPs were synthesized, employing cation-substituted surfactants: iron(III) dodecyl sulfate (FeDS) and cobalt(II) dodecyl sulfate (CoDS). For the synthesis of the MNPs, the critical micelle concentration of FeDS in 1-octanol (cmc = 0.90 mmol L-1) and the pseudo-ternary phase diagram of the n-heptane/CoDS/n-butanol/H2O system were studied and determined. Spheroidal magnetite MNPs with a 3.4 nm diameter and quasi-paramagnetic behavior were obtained using micellar systems of FeDS in 1-octanol. The Co MNPs obtained via inverse microemulsion, despite their broad size distribution and low MS, are chemically stable and superparamagnetic. The second method is based on the thermal decomposition of metal complexes, by which spherical MNPs of FePt and of metal oxides (Fe3O4, FeXO1-X, (Co,Fe)XO1-X and CoFe2O4) with controlled morphology and chemical stability were prepared. The method was not equally effective for the synthesis of FeAg and FeCo MNPs: the FeAg alloy was not obtained, while chemically stable FeCo MNPs were obtained without morphological control. Fe and FeCo MNPs were prepared by thermal reduction of Fe3O4 and CoFe2O4 MNPs that had previously been coated with silica.

The silica prevents inter-particle sintering and also lends the material a hydrophilic character and biocompatibility. The reduced samples showed an increase in MS values (between 21.3 and 163.9%) that is directly proportional to the MNP dimensions. The silica coating was performed via hydrolysis of tetraethyl orthosilicate (TEOS) in an inverse-microemulsion system. The thickness of the silica layer was controlled by varying the reaction time and the TEOS and MNP concentrations, and a mechanism for the coating process was proposed. Some samples received an additional coating of anatase-phase TiO2, for which ethylene glycol was used as solvent and as ligand to form a Ti glycolate precursor. The thickness of the TiO2 layer (2-12 nm) is controlled by varying the relative amounts of MNPs and the Ti precursor. Magnetic hyperthermia assays were carried out on the silica-coated samples and showed a large increase in the heating rate after thermal reduction, even for dilute MNP dispersions (0.6 to 4.5 mg mL-1). Heating rates between 0.3 and 3.0 °C min-1 and SAR values between 37.2 and 96.3 W g-1 were obtained. The photocatalytic activity of the coated samples was close to that of the pure anatase phase, with the advantage of a magnetic core that allows the catalyst to be recovered by simply applying an external magnetic field. The preliminary results of the magnetic hyperthermia and photocatalysis assays indicate a strong potential of the materials reported here for applications in biomedicine and photocatalysis.
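For orientation, specific absorption rate (SAR) figures like those above are conventionally derived from the initial heating rate via SAR = C * (dT/dt) / c, with C the specific heat of the dispersion (often approximated by that of water) and c the concentration of magnetic material. The values below are illustrative mid-range numbers, not measurements from this work:

```python
# Illustrative SAR calculation. The specific heat of water is assumed for
# the dispersion; heating rate and concentration are hypothetical mid-range values.
C_p = 4.186           # J g^-1 C^-1, specific heat of water (assumption)
dT_dt = 1.0 / 60.0    # heating rate: 1.0 C/min expressed in C/s (illustrative)
conc = 0.001          # 1.0 mg/mL of magnetic material, i.e. 0.001 g/mL (illustrative)

sar = C_p * dT_dt / conc
print(f"SAR = {sar:.1f} W/g")
```

A mid-range heating rate at a mid-range concentration lands inside the reported 37.2-96.3 W g-1 interval, which is a useful sanity check when comparing samples.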
Abstract:
Objectives: to establish reference samples consisting of recordings judged by consensus as representative of the presence or absence of the glottal stop (GS), and to compare auditory-perceptual judgments of the presence and absence of the GS with and without the use of reference samples. Methods: the study was divided into two stages. In STAGE 1, 480 sentences targeting plosive and fricative sounds, produced by speakers with a history of cleft lip and palate, were judged by three experienced speech-language pathologists for identification of the GS. The sentences were judged individually, and those without initial consensus were judged again simultaneously. The samples judged with consensus regarding the presence or absence of the GS during production of the six target plosive and six fricative consonants were selected to establish a Bank of Representative GS Samples. STAGE 2 consisted of selecting 48 reference samples for the 12 sounds of interest and 120 experimental samples, and having the experimental samples judged by three groups of judges, each group comprising three judges with a different level of experience in judging cleft palate speech. The judges rated the experimental samples twice: first without access to the references and, one week later, with access to them. Results: the STAGE 1 judgments showed consensus regarding the GS in 352 samples, of which 120 sentences showed adequate production of the sounds of interest and 232 were representative of GS use. These 352 samples constituted the Bank of Representative GS Samples.

The STAGE 2 results indicated that, comparing the mean Kappa value obtained for the 12 sounds of interest in each group in the judgments without and with access to the reference samples, agreement for group 1 (G1) went from fair (K=0.35) to moderate (K=0.55), for group 2 (G2) from moderate (K=0.44) to substantial (K=0.76), and for group 3 (G3) from substantial (K=0.72) to almost perfect (K=0.83). The best agreement was observed for the group of experienced speech-language pathologists (G3), followed by the recently graduated speech-language pathologists (G2), with the worst agreement in the group of undergraduate students (G1). Conclusion: a Bank of Representative GS Reference Samples was established, and the auditory-perceptual judgments made with the reference samples showed better inter-judge agreement and percentage of correct identifications than the judgments made without access to the references. The results suggest the importance of using reference samples to minimize the subjectivity of auditory-perceptual speech assessment.
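The agreement statistic reported above is Cohen's kappa, chance-corrected agreement between two judges; a minimal sketch for binary present/absent judgments, with invented ratings for illustration:

```python
# Cohen's kappa for two judges rating presence (1) / absence (0) of the glottal stop.
# Ratings are invented for illustration.

def cohens_kappa(r1, r2):
    n = len(r1)
    p_obs = sum(a == b for a, b in zip(r1, r2)) / n
    # Chance agreement from each judge's marginal rates
    p1, p2 = sum(r1) / n, sum(r2) / n
    p_exp = p1 * p2 + (1 - p1) * (1 - p2)
    return (p_obs - p_exp) / (1 - p_exp)

judge_a = [1, 1, 0, 0, 1, 0, 1, 0]
judge_b = [1, 1, 0, 0, 1, 0, 0, 1]
print(round(cohens_kappa(judge_a, judge_b), 2))
```

The verbal bands used in the abstract (fair, moderate, substantial, almost perfect) follow the conventional Landis-Koch interpretation of kappa ranges.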
Abstract:
During the crisis the European Central Bank’s roles have been greatly extended beyond its price stability mandate. In addition to the primary objective of price stability and the secondary objective of supporting EU economic policies, we identify ten new tasks related to monetary policy and financial stability. We argue that there are three main constraints on monetary policy: fiscal dominance, financial repercussions and regional divergences. By assessing the ECB’s tasks in light of these constraints, we highlight a number of synergies between these tasks and the ECB’s primary mandate of price stability. But we highlight major conflicts of interest related to the ECB’s participation in financial assistance programmes. We also underline that the ECB’s government bond purchasing programmes have introduced the concept of ‘monetary policy under conditionality’, which involves major dilemmas. A solution would be a major change towards a US-style system, in which state public debts are small, there are no federal bail-outs for states, the central bank does not purchase state debt and banks do not hold state debt. Such a change is unrealistic in the foreseeable future.