945 results for Point of interest
Abstract:
The text analyses intelligence activity against Poland in the period 1944-1989. It also contains a case study: an analysis of American intelligence activity directed against Poland. In examining the research thesis, the author used documents and analyses prepared by the Ministry of Internal Affairs. To best illustrate the point, the author presents a number of cases of persons who spied for the USA, drawn from the Ministry's training materials addressed to officers of the Security Service and the Citizens' Militia. The text tackles the following questions: (1) to what extent did the character of the socio-political system influence the number of persons convicted of espionage against Poland in the period under examination?, (2) what was the level of interest of foreign intelligence services in Poland before 1990?, (3) is it possible to identify the specificity of U.S. intelligence activity against Poland?
1) The analysis of the data indicates that the period 1946-1956 saw a great number of convictions for espionage, which is often associated with the peculiar political situation in Poland at that time. Up to 1953 the countries of the Eastern bloc reproduced Stalin's system, which ceased only with Stalin's death; from then on, the communist systems gradually transformed into the system of nomenklatura. Irrespective of these changes, Poland still witnessed a wave of repressions, which resulted from the threats the communist authorities perceived as continuously looming over them: combating the anti-communist underground movement, fighting the Ukrainian Insurgent Army, the Polish government-in-exile, possible revision of the borders, and social discontent related to the socio-political reforms. Hence, a great number of the espionage convictions of that time can be ascribed to purely political sentences. Equally significant is the fact that the judicial practice of the day assessed any contacts and relations with foreigners negatively. The excessive number of convictions could also ensue from other criminal-law provisions that applied to crimes against the State, including espionage. It is also important that in the Stalinist period the judiciary personnel acquired their skills and qualifications through intensive law courses dominated by Andrey Vyshinsky's theory of evidence and law. Additionally, the decree of 1944 introduced the Penal Code of the Polish Armed Forces, which increased the number of offences punishable by death, while high treason was placed under military jurisdiction (civilians were prosecuted in military courts until 1955; espionage, however, remained under military jurisdiction). In 1946 the Decree on crimes particularly dangerous in the period of the State's recovery, later called the Small Penal Code, was introduced.
2) The interest that foreign intelligence services showed in Poland was similar to their interest in all countries of Central and Eastern Europe. In the case of Poland, it should be noted that foreign intelligence services recruited Polish citizens who had stayed abroad and returned to their home country after WWII. The services also gathered information from Poles staying in immigrant camps (e.g. in the FRG). The activity of the American intelligence service on the territory of the FRG and West Berlin played a key role. The documents of the Ministry of Internal Affairs pointed to the global range of this activity, e.g. the recruitment of Polish sailors in the ports of the Netherlands, Japan, etc. With the developments of the 1970s, espionage, which had so far concentrated on the defence and strategic sectors, shifted its focus to the science and technology of the People's Republic of Poland. The acquisition of collaborators in academic circles became much easier as the PRL opened up to academic exchange. Owing to the visa system, the selection of candidates for intelligence services (e.g. the American) began in embassies. In the 1980s the activity of foreign intelligence services concentrated on the specific political situation in Poland, i.e. the growing significance of the "Solidarity" social movement.
3) The specificity of American intelligence activity against Poland was related to the size of the residency staff, which was the largest among the Western countries. The wide range of these activities is suggested by the quantitative data on convictions for espionage in the years 1944-1984 (bearing in mind the factors mentioned earlier in the text, which can lead to misinterpretation of these data). The data and documents prepared by the Ministry of Internal Affairs should be treated with caution, as the Polish counter-intelligence service frequently classified ordinary diplomatic practice and any contacts with foreigners as espionage threats. This is clearly visible in the language of the training materials concerned with "secret service methods of intelligence activity" as well as in the documents on the operational activities of the Security Service with respect to foreigners. The level of U.S. interest in Poland was mirrored in the classification of diplomatic posts, in which Warsaw occupied the second place (the so-called Group "B") on a three-point scale. The CIA also suffered spectacular defeats in its activity in Poland: the support for the Polish underground anti-communist organisation Freedom and Independence, and the so-called Munich-Berg episode (both in the 1950s). The text focuses only on selected issues related to espionage activities against Poland, and the analysis is based on selected sources, which limits the research scope; it was not, however, the author's aim to present espionage activity against Poland comprehensively. To assess the real threat posed by espionage, one should analyse the cases of persons convicted of espionage in the period 1944-1989, as the available quantitative data mentioned in the text cannot serve as an unambiguous benchmark for the scale of espionage activity. The inaccuracies in the interpretation of the data, and the variables that can affect the evaluation of this phenomenon, are pointed out in the text.
Abstract:
This paper studies monetary policy transmission using several statistical tools. We find that the relationships between the policy interest rate and the financial system's interest rates are positive and statistically significant, and that transmission is complete eight months after a policy shock occurs. The speed of transmission varies with the type of interest rate: it is faster for interest rates on loans to households, and is particularly rapid and complete for rates on preferential commercial loans. Transmission is slower for credit card and mortgage rates, due to regulatory issues (interest rate ceilings).
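As a rough illustration of how such pass-through estimates can be obtained (a generic sketch on synthetic data, not the paper's actual model or dataset), one can regress monthly changes in a lending rate on current and lagged changes in the policy rate and read the cumulative pass-through from the summed coefficients:

```python
# Minimal pass-through sketch: distributed-lag regression of lending-rate changes
# on policy-rate changes. Data, lag length and series names are illustrative.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
d_policy = rng.normal(0, 0.25, n)                      # monthly policy-rate changes (pp)
# Synthetic lending rate that passes the shock through over ~8 months
weights = np.array([0.25, 0.20, 0.15, 0.12, 0.10, 0.08, 0.06, 0.04])
d_lending = np.convolve(d_policy, weights)[:n] + rng.normal(0, 0.05, n)

df = pd.DataFrame({"d_lending": d_lending, "d_policy": d_policy})
max_lag = 8
for k in range(max_lag + 1):
    df[f"d_policy_l{k}"] = df["d_policy"].shift(k)
df = df.dropna()

X = sm.add_constant(df[[f"d_policy_l{k}" for k in range(max_lag + 1)]])
res = sm.OLS(df["d_lending"], X).fit()

cumulative = res.params.drop("const").cumsum()
print(res.summary())
print("Cumulative pass-through by lag:\n", cumulative.round(2))
```

Under these assumptions the summed lag coefficients approach one, which is what "complete transmission after eight months" would look like in such a setup.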
Abstract:
Part 8: Business Strategies Alignment
Abstract:
Doctoral thesis, Tourism, Faculdade de Economia, Universidade do Algarve, 2016
Abstract:
An interest rate sensitivity assessment framework based on fixed income yield indexes is developed and applied to two types of emerging market corporate debt: investment grade and high yield exposures. Our research advances beyond correlation analyses focused on co-movements in yields and/or spreads of risky and risk-free assets. We show that correlation-based analyses of interest rate sensitivity can appear rather inconclusive and, hence, we investigate the bottom-line profit and loss of a hypothetical model portfolio of corporates. We consider historical data covering the period 2002-2015, which enables us to assess the interest rate sensitivity of assets during the development, the apogee, and the aftermath of the global financial crisis. Based on empirical evidence, for both investment-grade and speculative-grade securities, we find that emerging market corporates exhibit two different regimes of sensitivity to interest rate changes: we observe switching from a positive sensitivity under normal market conditions to a negative one during distressed phases of the business cycle. This research sheds light on how financial institutions may approach interest rate risk management, evidencing that even plain vanilla portfolios of emerging market corporates, which on average could appear rather insensitive to interest rate risk, in fact exhibit binary behavior in their interest rate sensitivities. Our findings allow banks and financial institutions to optimize economic capital under Basel III regulatory capital rules.
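A minimal sketch of the regime idea, on synthetic data and with invented series names (not the authors' framework or portfolio), is to track a rolling sensitivity of the portfolio's profit and loss to risk-free yield changes and flag the months where its sign flips:

```python
# Rolling beta of portfolio P&L to benchmark yield changes; sign switches mark
# a change of sensitivity regime. Window length and series are assumptions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
dates = pd.date_range("2002-01-31", "2015-12-31", freq="M")
d_yield = rng.normal(0, 0.15, len(dates))                  # monthly yield changes (pp)
crisis = (dates >= "2008-06-30") & (dates <= "2009-06-30")
beta_true = np.where(crisis, -2.0, 1.5)                    # regime-dependent sensitivity
portfolio_pnl = beta_true * d_yield + rng.normal(0, 0.3, len(dates))

df = pd.DataFrame({"d_yield": d_yield, "pnl": portfolio_pnl}, index=dates)
window = 12
rolling_beta = (
    df["pnl"].rolling(window).cov(df["d_yield"]) / df["d_yield"].rolling(window).var()
)
switches = rolling_beta[np.sign(rolling_beta).diff().abs() == 2]
print(rolling_beta.describe().round(2))
print("Months where the estimated sensitivity changed sign:",
      switches.index.strftime("%Y-%m").tolist())
```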
Abstract:
Dry fermented sausages are highly appreciated food specialties, mainly in Portugal and other southern European countries. All research efforts aimed at improving the quality and safety of traditional dry sausages are therefore of interest, since they are likely to result in products with higher added value and quality standards better suited to the requirements and concerns of modern consumers. Among those efforts, studies involving innovative processing parameters and technologies stand out, as they address practical problems encountered in the meat industry, most of which are associated with food quality and safety. Additionally, the characterization of traditional dry sausages and the rationalization of their processing are essential for any further official certification. Thus, this article attempts to point out some of the research lines of highest interest in meat science (and in particular for the broad variety of regional dry fermented sausages), towards the valorisation of their technological, nutritional and commercial features. It also emphasizes the importance of continuously improving the quality and safety of meat products as a way to respond to current concerns regarding their consumption and to the general advice to reduce daily intake.
Abstract:
The present work was carried out on two ambrotypes and two tintypes. It aimed to evaluate their chemical and physical characteristics, especially their degradation patterns, to understand the materials used in their production, and to cross-check analytical and historical information about the production processes. To do so, multi-analytical, non-destructive methods were applied. Technical photography highlighted the surface morphology of the objects and, through UV radiation, showed the distribution of the protective coatings on their surfaces, which differed greatly between the four pieces. OM allowed a detailed observation of the surfaces along with the selection of areas of interest to be analysed with SEM-EDS. SEM-EDS was the technique used most extensively and the one that provided the most insightful results: it revealed the morphology of the image-forming particles and the differences between highlights, dark areas and the interfaces between them. Elemental point analyses and elemental maps were used to identify the image-forming particles as silver and to detect the presence of compounds related to the production, particularly gold used to highlight jewellery, iron as the red pigment, and traces of the compounds used in the photographic process containing Ag, I, Na and S. Some degradation compounds containing Ag, Cu, S and Cl were also analysed. With μ-FT-IR, the presence of collodion was confirmed and the source of the protective varnishes was identified, namely mastic and shellac, either in mixtures of the two or alone. μ-Raman detected the presence of metallic silver and silver chloride on the objects and identified one of the red pigments as Mars red. Finally, μ-XRD showed the presence of metallic silver and silver iodide on both ambrotypes and tintypes, and of hematite, magnetite and wuestite on the tintypes.
Abstract:
In recent years, Additive Manufacturing (AM) has drawn the attention of both academic research and industry, as it may deeply change and improve several industrial sectors. From the material point of view, AM results in a peculiar microstructure that strictly depends on the conditions of the additive process and directly affects mechanical properties. The present PhD research project aimed at investigating the process-microstructure-properties relationship of additively manufactured metal components. Two technologies belonging to the AM family were considered: Laser-based Powder Bed Fusion (LPBF) and Wire-and-Arc Additive Manufacturing (WAAM). The experimental activity was carried out on different metals of industrial interest: a CoCrMo biomedical alloy and an AlSi7Mg0.6 alloy processed by LPBF, and an AlMg4.5Mn alloy and an AISI 304L austenitic stainless steel processed by WAAM. In the case of LPBF, great attention was paid to the influence that the feedstock material and process parameters exert on the hardness and on the morphological and microstructural features of the produced samples. The analyses, aimed at minimizing microstructural defects, led to process optimization. For heat-treatable LPBF alloys, innovative post-process heat treatments, tailored to the peculiar hierarchical microstructure induced by LPBF, were developed and investigated in depth. The main mechanical properties of the as-built and heat-treated alloys were assessed and correlated well with the specific LPBF microstructure. Results showed that, if properly optimized, samples exhibit a good trade-off between strength and ductility already in the as-built condition; tailored heat treatments, however, succeeded in further improving the overall performance of the LPBF alloys. Characterization of the WAAM alloys, instead, evidenced the microstructural and mechanical anisotropy typical of AM metals. The experiments also revealed an outstanding anisotropy in the elastic modulus of the austenitic stainless steel which, along with other mechanical properties, was explained on the basis of microstructural analyses.
Abstract:
The design optimization of industrial products has always been an essential activity to improve product quality while reducing time-to-market and production costs. Although cost management is very complex and spans all phases of the product life cycle, the control of geometrical and dimensional variations, known as Dimensional Management (DM), allows compliance with product and process requirements. Hence, tolerance-cost optimization becomes the main practice for an effective application of Design for Tolerancing (DfT) and Design to Cost (DtC) approaches, as it connects product tolerances with the associated manufacturing costs. However, despite the growing interest in this topic, the profitable industrial application of these techniques is hampered by their complexity: the definition of a systematic framework is the key element for improving design optimization and enhancing the concurrent use of Computer-Aided tools and Model-Based Definition (MBD) practices. The present doctoral research aims to define and develop an integrated methodology for product/process design optimization that better exploits the capabilities of advanced simulations and tools. By implementing predictive models and multi-disciplinary optimization, a Computer-Aided Integrated framework for tolerance-cost optimization is proposed, allowing the integration of DfT and DtC approaches and their direct application to the design of automotive components. Several case studies have been considered, with the final application of the integrated framework to a high-performance V12 engine assembly, achieving both the functional targets and a cost reduction. From a scientific point of view, the proposed methodology improves the tolerance-cost optimization of industrial components: the integration of theoretical approaches and Computer-Aided tools makes it possible to analyse the influence of tolerances on both product performance and manufacturing costs. The case studies proved the suitability of the methodology for application in the industrial field and identified further areas for improvement and refinement.
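To make the tolerance-cost trade-off concrete, the sketch below solves a toy tolerance allocation problem with an assumed reciprocal-power cost model and a root-sum-square stack-up constraint; the coefficients, the 0.30 mm requirement and the use of SciPy's SLSQP solver are illustrative choices, not the thesis's framework:

```python
# Toy tolerance-cost optimization: widen each tolerance to cut manufacturing
# cost while the RSS stack-up stays within the functional requirement.
import numpy as np
from scipy.optimize import minimize

a = np.array([4.0, 6.0, 3.0])        # cost scale of each toleranced feature (cost units)
b = np.array([1.0, 1.2, 0.8])        # cost exponents: cost_i = a_i / t_i**b_i
T_req = 0.30                         # allowed assembly variation (mm)

def cost(t):
    return float(np.sum(a / t**b))

def stackup(t):
    return float(np.sqrt(np.sum(t**2)))   # root-sum-square stack-up of the contributors

res = minimize(
    cost,
    x0=np.full(3, 0.10),
    bounds=[(0.02, 0.25)] * 3,
    constraints={"type": "ineq", "fun": lambda t: T_req - stackup(t)},
    method="SLSQP",
)
print("Optimal tolerances (mm):", np.round(res.x, 3))
print(f"Stack-up: {stackup(res.x):.3f} mm  Total cost: {cost(res.x):.2f}")
```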
Abstract:
The study of tides and their interactions with the complex dynamics of the global ocean represents a crucial challenge in ocean modelling. This thesis aims to deepen this study from a dynamical point of view, analysing the tidal effects on the general circulation of the ocean. We perform different experiments with a mesoscale-permitting global ocean model forced by both atmospheric fields and the astronomical tidal potential, and we implement two parametrizations to include tidal phenomena that are currently unresolved by the model, with particular emphasis on the topographic wave drag for locally dissipating internal waves. An additional experiment using a mesoscale-resolving configuration is used to compare the simulated tides at different resolutions with observed data. We find that the accuracy of the modelled tides strongly depends on the region and on the harmonic component of interest, even though the increased resolution improves the modelled topography and resolves more intense internal waves. We then focus on the impact of tides in the Atlantic Ocean and find that tides weaken the overturning circulation during the analysed period from 1981 to 2007, even though the interannual differences vary strongly in both amplitude and phase. The zonally integrated momentum balance shows that the tide changes the water stratification at the zonal boundaries, modifying the pressure and therefore the geostrophic balance over the entire basin. Finally, we describe the overturning circulation in the Mediterranean Sea by computing the meridional and zonal streamfunctions in both the Eulerian and residual frameworks. The circulation is characterised by different cells, and their forcing processes are described with particular emphasis on the role of mesoscale dynamics and of a transient climatic event. We complete the description of the overturning circulation by giving evidence, for the first time, of the connection between the meridional and zonal cells.
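For reference, the meridional overturning streamfunction mentioned above is a standard diagnostic obtained by integrating the meridional velocity zonally and then cumulatively in depth; the sketch below shows that computation on a placeholder grid with a random velocity field rather than actual model output:

```python
# Meridional overturning streamfunction from a model's meridional velocity:
# zonal integral of v, then cumulative integral from the surface downward.
import numpy as np

nz, ny, nx = 30, 90, 180
dz = np.full(nz, 100.0)                  # layer thicknesses (m)
dx = np.full((ny, nx), 50.0e3)           # zonal grid spacing (m)
v = np.random.default_rng(2).normal(0, 0.02, (nz, ny, nx))   # meridional velocity (m/s)

# Zonally integrated meridional transport per layer (m^3/s)
transport = np.sum(v * dx[None, :, :], axis=2) * dz[:, None]

# Streamfunction in Sverdrups (1 Sv = 1e6 m^3/s)
psi = np.cumsum(transport, axis=0) / 1.0e6
print("psi shape (depth, latitude):", psi.shape)
print("Max overturning cell strength: %.2f Sv" % np.abs(psi).max())
```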
Abstract:
In the last decades we have seen a soaring interest in autonomous robots, boosted not only by academia and industry but also by the ever-increasing demand from civil users. As a matter of fact, autonomous robots are fast spreading into all aspects of human life: we can see them cleaning houses, navigating through city traffic, or harvesting fruits and vegetables. Almost all commercial drones already exhibit unprecedented and sophisticated skills that make them suitable for these applications, such as obstacle avoidance, simultaneous localisation and mapping, path planning, visual-inertial odometry, and object tracking. The major limitations of such robotic platforms lie in the limited payload they can carry, in their cost, and in the limited autonomy due to finite battery capacity. For this reason, researchers have started to develop new algorithms able to run on platforms that are resource-constrained both in computational capability and in the types of sensors they carry, focusing especially on very cheap sensors and hardware. The possibility of using a limited number of sensors has allowed UAV size to be scaled down considerably, while the implementation of new, more efficient algorithms that perform the same tasks in less time helps to cope with the limited autonomy. However, the robots developed so far are not mature enough to operate fully autonomously without human supervision, because of dimensions that are still too large (especially for aerial vehicles), which make these platforms unsafe around humans, and because of the high probability of numerical and decision errors that robots may make. In this perspective, this thesis aims to review and improve current state-of-the-art solutions for autonomous navigation from a purely practical point of view. In particular, we focus on the problems of robot control, trajectory planning, environment exploration, and obstacle avoidance.
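As a concrete, if deliberately minimal, example of the trajectory-planning problem discussed in the thesis (which addresses far richer settings), the sketch below runs a grid-based A* search on a hypothetical 8x8 occupancy map with a Manhattan heuristic:

```python
# Minimal grid A*: 4-connected moves, unit step cost, Manhattan heuristic.
import heapq

GRID = [
    "........",
    "..####..",
    "..#..#..",
    "..#..#..",
    "..#..#..",
    "..####..",
    "........",
    "........",
]
FREE = lambda r, c: 0 <= r < 8 and 0 <= c < 8 and GRID[r][c] == "."

def astar(start, goal):
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan heuristic
    open_set = [(h(start), 0, start, [start])]
    seen = set()
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if FREE(nr, nc) and (nr, nc) not in seen:
                heapq.heappush(open_set, (g + 1 + h((nr, nc)), g + 1, (nr, nc), path + [(nr, nc)]))
    return None   # no collision-free path exists

print(astar((0, 0), (7, 7)))
```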
Abstract:
Long-term monitoring of acoustic environments is gaining popularity thanks to the wealth of scientific and engineering insights that it provides. The increasing interest is due to the constant growth of the storage capacity and computational power needed to process large amounts of data. In this perspective, machine learning (ML) provides a broad family of data-driven statistical techniques for dealing with large databases. Nowadays, the conventional praxis of sound level meter measurements limits the global description of a sound scene to an energetic point of view: the equivalent continuous level Leq is the main metric used to define an acoustic environment. Finer analyses involve the use of statistical levels; however, acoustic percentiles are based on temporal assumptions that are not always reliable. A statistical approach based on the study of the occurrences of sound pressure levels brings a different perspective to the analysis of long-term monitoring. Depicting a sound scene through the most probable sound pressure levels, rather than through portions of energy, yields more specific information about the activity carried out during the measurements, and the statistical mode of the occurrences can capture typical behaviours of specific kinds of sound sources. The present work proposes an ML-based method to identify, separate and measure coexisting sound sources in real-world scenarios. It is based on long-term monitoring and is addressed to acousticians focused on the analysis of environmental noise in manifold contexts. The method relies on clustering analysis: two algorithms, the Gaussian Mixture Model and K-means clustering, form the core of a process for investigating different active spaces monitored with sound level meters. The procedure has been applied in two different contexts, university lecture halls and offices, where it shows robust and reliable results in describing the acoustic scenario, and it could represent an important analytical tool for acousticians.
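The clustering step can be illustrated with a small sketch on synthetic sound-level data: a Gaussian Mixture Model and K-means are fitted to short-term level values drawn from two assumed sources, so that each component or centroid can be read as a candidate source. The two-component mixture, the level values and k = 2 are assumptions for illustration only:

```python
# Fit GMM and K-means to a synthetic distribution of sound pressure levels.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
background = rng.normal(42.0, 2.0, 6000)     # e.g. ventilation noise, dB(A)
activity = rng.normal(62.0, 4.0, 2000)       # e.g. speech in a lecture hall, dB(A)
levels = np.concatenate([background, activity]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(levels)
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(levels)

print("GMM means (dB):      ", np.sort(gmm.means_.ravel()).round(1))
print("GMM weights:         ", np.sort(gmm.weights_).round(2))
print("K-means centres (dB):", np.sort(kmeans.cluster_centers_.ravel()).round(1))
```

The recovered means and weights approximate the most probable level and the time share of each assumed source, which is the kind of per-source description the abstract refers to.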
Abstract:
The aim of the present study was to develop a statistical approach to define the best cut-off for copy number alteration (CNA) calling from genomic data provided by high-throughput experiments, able to predict a specific clinical end-point (early relapse within 18 months) in the context of Multiple Myeloma (MM). 743 newly diagnosed MM patients with SNP array-derived genomic data and clinical data were included in the study. CNAs were called both by a conventional (classic, CL) method and by an outcome-oriented (OO) method, and the Progression Free Survival (PFS) hazard ratios of the CNAs called by the two approaches were compared. The OO approach successfully identified patients at higher risk of relapse, and the univariate survival analysis showed stronger prognostic effects for OO-defined high-risk alterations than for those defined by the CL approach, statistically significant for 12 CNAs. Overall, 155/743 patients relapsed within 18 months of the start of therapy. A small number of OO-defined CNAs were significantly recurrent in early-relapsed patients (ER-CNAs): amp1q, amp2p, del2p, del12p, del17p and del19p. Two groups of patients were identified, carrying or not carrying ≥1 ER-CNA (249 vs. 494, respectively), the first one with significantly shorter PFS and overall survival (OS) (PFS HR 2.15, p<0.0001; OS HR 2.37, p<0.0001). The risk of relapse defined by the presence of ≥1 ER-CNA was independent of that conferred by R-ISS stage 3 (HR=1.51; p=0.01) and by a low-quality (< stable disease) clinical response (HR=2.59, p=0.004). Notably, the type of induction therapy was not informative in this respect, suggesting that early relapse is strongly related to the patients' baseline genomic architecture. In conclusion, the OO approach employed here made it possible to define CNA-specific dynamic clonality cut-offs, improving the accuracy of CNA calls in identifying MM patients with the highest probability of early relapse. Being outcome-dependent, the OO approach is dynamic and can be adjusted according to the selected outcome variable of interest.
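The survival comparison behind the reported hazard ratios can be sketched with a Cox proportional-hazards model on a simulated cohort; the code below assumes the third-party lifelines package, and the column names and simulated data are placeholders rather than the study's dataset:

```python
# Cox PH sketch: hazard ratio of PFS for carriers of >=1 early-relapse CNA.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(4)
n = 743
er_cna = rng.binomial(1, 249 / 743, n)                        # >=1 ER-CNA carrier flag
baseline = rng.exponential(scale=40.0, size=n)                # months to progression
pfs_months = np.where(er_cna == 1, baseline / 2.15, baseline)  # simulate HR ~ 2.15
event = rng.binomial(1, 0.8, n)                                # progression observed

df = pd.DataFrame({"pfs_months": pfs_months, "event": event, "er_cna": er_cna})
cph = CoxPHFitter().fit(df, duration_col="pfs_months", event_col="event")
cph.print_summary()   # the hazard ratio for er_cna should recover roughly 2.15
```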
Abstract:
When it comes to designing a structure, architects and engineers join forces to create and build the most beautiful and efficient building possible. From finding new shapes and forms to optimizing stability and resistance, there is a constant link to be made between the two professions. In architecture, there has always been a particular interest in creating new shapes and types of structures inspired by many different fields, one of them being nature itself. In engineering, the selection of the optimum has always dictated the way of thinking about and designing structures, and this mindset led, through successive studies, to the current best practices in construction. At a certain point, however, both disciplines were limited by traditional manufacturing constraints. Over the last decades, much technological progress has been made, making it possible to go beyond these constraints. With the emergence of Wire-and-Arc Additive Manufacturing (WAAM) combined with Algorithmic-Aided Design (AAD), architects and engineers are offered new opportunities to merge architectural beauty and structural efficiency: both technologies allow unusual and complex structural shapes to be explored and built, in addition to reducing costs and environmental impacts. In this study, the author makes use of these technologies and assesses their potential, first to design an aesthetically pleasing tree-like column and then to propose a new type of standardized and optimized sandwich cross-section to the construction industry. Parametric algorithms to model the dendriform column and the new sandwich cross-section are developed and presented in detail. A draft catalogue of the latter, and the methods used to establish it, are then proposed and discussed. Finally, the buckling behaviour of the latter is assessed considering both standard steel and WAAM material properties.
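As a back-of-the-envelope illustration of why the material model matters in such a buckling assessment, the sketch below compares the Euler critical load of a generic slender hollow member under standard steel and an assumed, reduced WAAM elastic modulus; all geometry and property values are invented for illustration:

```python
# Euler buckling comparison for a pinned-pinned hollow circular member.
import numpy as np

L = 3.0                                    # member length (m), pinned-pinned
E_steel = 210e9                            # standard structural steel (Pa)
E_waam = 140e9                             # assumed reduced WAAM modulus (Pa)

d_out, t = 0.080, 0.006                    # outer diameter and wall thickness (m)
d_in = d_out - 2 * t
I = np.pi / 64 * (d_out**4 - d_in**4)      # second moment of area (m^4)

for name, E in [("steel", E_steel), ("WAAM", E_waam)]:
    P_cr = np.pi**2 * E * I / L**2         # Euler critical load
    print(f"{name:5s}: P_cr = {P_cr / 1e3:.1f} kN")
```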
Root cause analysis applied to a finite element model's refinement of a negative stiffness structure
Abstract:
Negative stiffness structures are mechanical systems that require a decrease in the applied force to generate an increase in displacement. They possess special characteristics such as snap-through and bi-stability, which make them particularly suitable for applications such as shock absorption, vibration isolation and damping. These characteristics have attracted growing attention and, in order to match them to the needs of a given application, numerical simulation is of great interest. In this regard, this thesis continues previous studies on a circular negative stiffness structure and aims to refine its numerical model by presenting a new solution. To that end, an investigation procedure is needed; amongst the available methods, root cause analysis was chosen to perform the investigation, since it provides a clear view of the problem under analysis and a categorization of all the causes behind it. The cause-effect analysis yielded the main causes influencing the numerical results. Once all of the causes were listed, solutions were proposed, leading to a new numerical model: a nonlinear analysis with hexahedral elements and a hyperelastic material model. The results were analysed through force-displacement curves, allowing the visualization of the structure's energy recovery. When compared to the results obtained in the experimental part, the trend is similar and the negative stiffness behaviour is present.
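For readers unfamiliar with the force-displacement behaviour described above, the sketch below evaluates the classical analytic curve of a two-bar von Mises truss, the textbook snap-through example; it is only a conceptual stand-in for the thesis's finite element model, with arbitrary stiffness and geometry:

```python
# Analytic force-displacement curve of a two-bar von Mises (snap-through) truss.
import numpy as np

k = 2.0e4        # axial stiffness of each bar (N/m)
a = 0.10         # half-span (m)
h = 0.03         # initial apex height (m)
L0 = np.hypot(a, h)

u = np.linspace(0.0, 2.2 * h, 200)            # vertical displacement of the apex
L = np.hypot(a, h - u)                        # current bar length
F = 2.0 * k * (L0 - L) * (h - u) / L          # vertical force at the apex

stiffness = np.gradient(F, u)
print("Peak force: %.1f N" % F.max())
print("Negative stiffness branch between u = %.3f m and u = %.3f m"
      % (u[stiffness < 0][0], u[stiffness < 0][-1]))
```

The branch where the tangent stiffness is negative is the regime the abstract refers to: the force drops while the displacement keeps increasing, until the structure snaps through to its second stable configuration.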