928 results for test case generation
Abstract:
Purpose – Employee turnover entails considerable costs and is a major problem for the construction industry. By creating an extensive framework, this study aims to examine whether perceived work-related factors affect turnover intention in South Korean construction companies. Research design – The paper is based on the results of a questionnaire of 136 employees conducted at, and provided by, a Korean construction company. Research hypotheses were tested via correlation analyses. The most influential work-related factors, as well as differences among job levels, were determined by multiple regression analyses. Findings – Communication, immediate leaders, organizational commitment, and organizational pride substantially affect turnover intentions. All of these factors can be considered relational factors. The most influential factors differ among job levels. Discussion/practical implications – Immediate leaders should be aware of their role in retaining employees and enhance communication, organizational commitment and pride. This study shows how the importance of certain variables differs for groups of employees. Theoretical implications/limitations – This study is based on a sample of employees from a Korean construction company. Therefore, the generalizability of the findings has to be tested. Future research should test the proposed framework with other factors or resources. Originality/value – This study sheds light on the turnover subject in the South Korean construction industry. It shows that different factors can influence turnover intention at different job levels. A framework was created, which is based on 16 work-related factors, including organizational factors, HRM practices and job attitudes.
Abstract:
Wireless sensor networks (WSNs) differ from conventional distributed systems in many aspects. The resource limitation of sensor nodes, the ad-hoc communication and topology of the network, coupled with an unpredictable deployment environment, are difficult non-functional constraints that must be carefully taken into account when developing software systems for a WSN. Thus, more research needs to be done on designing, implementing and maintaining software for WSNs. This thesis aims to contribute to research being done in this area by presenting an approach to WSN application development that will improve the reusability, flexibility, and maintainability of the software. Firstly, we present a programming model and software architecture aimed at describing WSN applications independently of the underlying operating system and hardware. The proposed architecture is described and realized using the Model-Driven Architecture (MDA) standard in order to achieve satisfactory levels of encapsulation and abstraction when programming sensor nodes. In addition, we study different non-functional constraints of WSN applications and propose two approaches to optimizing the application to satisfy these constraints. A real prototype framework was built to demonstrate the solutions developed in the thesis. The framework implemented the programming model and the multi-layered software architecture as components. A graphical interface, code generation components and supporting tools were also included to help developers design, implement, optimize, and test the WSN software. Finally, we evaluate and critically assess the proposed concepts. Two case studies are provided to support the evaluation. The first case study, a framework evaluation, is designed to assess the ease with which novice and intermediate users can develop correct and power-efficient WSN applications, the portability level achieved by developing applications at a high level of abstraction, and the estimated overhead due to usage of the framework in terms of the footprint and executable code size of the application. In the second case study, we discuss the design, implementation and optimization of a real-world application named TempSense, where a sensor network is used to monitor the temperature within an area.
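To illustrate the kind of platform independence the abstract describes, the following is a minimal, hypothetical sketch (not taken from the thesis): application logic is written against an abstract node interface, and platform-specific bindings are supplied separately, which is the separation an MDA-based architecture with code generation aims to provide. All class and method names here are illustrative assumptions.

```python
from abc import ABC, abstractmethod

class SensorNode(ABC):
    """Platform-independent node abstraction (illustrative only)."""

    @abstractmethod
    def read_temperature(self) -> float: ...

    @abstractmethod
    def send(self, payload: bytes) -> None: ...

class TelosBNode(SensorNode):
    """Hypothetical platform-specific binding; a code generator could emit
    the equivalent for TinyOS/nesC or Contiki instead."""

    def read_temperature(self) -> float:
        return 21.5  # stub: would call the platform's sensor driver

    def send(self, payload: bytes) -> None:
        print(f"radio tx {len(payload)} bytes")  # stub: platform radio stack

def sample_and_report(node: SensorNode, threshold: float) -> None:
    """Application logic written once, against the abstract interface."""
    temperature = node.read_temperature()
    if temperature > threshold:
        node.send(f"{temperature:.1f}".encode())

if __name__ == "__main__":
    sample_and_report(TelosBNode(), threshold=20.0)
```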
Abstract:
In Germany the upscaling algorithm is currently the standard approach for evaluating the PV power produced in a region. This method involves spatially interpolating the normalized power of a set of reference PV plants to estimate the power production of another set of unknown plants. As little information on the performance of this method could be found in the literature, the first goal of this thesis is to conduct an analysis of the uncertainty associated with this method. It was found that this method can lead to large errors when the set of reference plants has different characteristics or weather conditions than the set of unknown plants and when the set of reference plants is small. Based on these preliminary findings, an alternative method is proposed for calculating the aggregate power production of a set of PV plants. A probabilistic approach was chosen, in which a power production is calculated at each PV plant from the corresponding weather data. The probabilistic approach consists of evaluating the power for each frequently occurring value of the parameters and estimating the most probable value by averaging these power values weighted by their frequency of occurrence. The most frequent parameter sets (e.g. module azimuth and tilt angle) and their frequency of occurrence have been assessed on the basis of a statistical analysis of the parameters of approx. 35 000 PV plants. It has been found that the plant parameters are statistically dependent on the size and location of the PV plants. Accordingly, separate statistical values have been assessed for 14 classes of nominal capacity and 95 regions in Germany (two-digit zip-code areas). The performance of the upscaling and probabilistic approaches has been compared on the basis of 15 min power measurements from 715 PV plants provided by the German distribution system operator LEW Verteilnetz. It was found that the error of the probabilistic method is smaller than that of the upscaling method when the number of reference plants is sufficiently large (>100 reference plants in the considered case study). When the number of reference plants is limited (<50 reference plants for the considered case study), it was found that the proposed approach provides a noticeable gain in accuracy with respect to the upscaling method.
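A small Python sketch of the weighted-average step described above: for each frequently occurring parameter set (e.g. module azimuth and tilt), a power value is computed and the estimate is the frequency-weighted mean. The power model and the parameter statistics below are placeholder assumptions for illustration, not the thesis's actual model or data.

```python
import math

# Hypothetical per-class statistics: (azimuth_deg, tilt_deg) -> relative frequency.
# In the thesis such statistics are derived from ~35 000 plants per capacity class and region.
param_freq = {
    (180.0, 30.0): 0.55,   # south-facing, 30 degree tilt
    (135.0, 30.0): 0.25,   # south-east
    (225.0, 25.0): 0.20,   # south-west
}

def plant_power(azimuth_deg, tilt_deg, irradiance_w_m2, p_nom_kw):
    """Placeholder power model: nominal power scaled by irradiance and a crude
    orientation penalty; a real model would use irradiance transposition and
    temperature effects."""
    orientation = math.cos(math.radians(azimuth_deg - 180.0)) * math.cos(math.radians(tilt_deg - 35.0))
    return p_nom_kw * (irradiance_w_m2 / 1000.0) * max(orientation, 0.0)

def probabilistic_power(irradiance_w_m2, p_nom_kw):
    """Most probable power: average over parameter sets weighted by their frequency."""
    return sum(freq * plant_power(az, tilt, irradiance_w_m2, p_nom_kw)
               for (az, tilt), freq in param_freq.items())

print(probabilistic_power(irradiance_w_m2=800.0, p_nom_kw=10.0))
```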
Abstract:
When transporting timber from the forest to the mills, many unforeseen events can occur that disrupt the planned trips (for example, due to weather conditions, forest fires, the presence of new loads, etc.). When such events only become known during a trip, the truck performing that trip must be diverted to an alternative route. Without information on such a route, the truck driver is likely to choose an alternative route that is unnecessarily long or, worse, one that is itself "closed" because of an unforeseen event. It is therefore essential to provide drivers with real-time information, in particular suggestions for alternative routes when a planned road turns out to be impassable. The recourse options in case of unforeseen events depend on the characteristics of the supply chain under study, such as the presence of self-loading trucks and the transport management policy. We present three articles dealing with different application contexts, along with models and solution methods adapted to each context. In the first article, the truck drivers have the entire weekly plan for the current week. In this context, every effort must be made to minimize changes to the initial plan. Although the truck fleet is homogeneous, there is an order of priority among drivers: the highest-priority drivers receive the largest workloads, and minimizing changes to their plans is also a priority. Since the consequences of unforeseen events on the transport plan are essentially cancellations and/or delays of some trips, the proposed approach first handles the cancellation and delay of a single trip and is then generalized to handle more complex events. In this approach, we try to reschedule the affected trips within the same week so that a loader is free when the truck arrives both at the forest site and at the mill. In this way, the trips of the other trucks are not modified. This approach provides dispatchers with alternative plans within a few seconds. Better solutions could be obtained if the dispatcher were allowed to make more changes to the initial plan. In the second article, we consider a context where only one trip at a time is communicated to the drivers. The dispatcher waits until the driver completes his trip before revealing the next one. This context is more flexible and offers more recourse options in case of unforeseen events. Moreover, the weekly problem can be divided into daily problems, since demand is daily and the mills are open for limited periods during the day. We use a mathematical programming model based on a space-time network to react to disruptions. Although disruptions can have different effects on the initial transport plan, a key feature of the proposed model is that it remains valid for handling all unforeseen events, whatever their nature. Indeed, the impact of these events is captured in the space-time network and in the input parameters rather than in the model itself. The model is solved for the current day whenever an unforeseen event is revealed.
In the last article, the truck fleet is heterogeneous and includes trucks with on-board loaders. The route configuration for these trucks differs from that of regular trucks, since they do not need to be synchronized with the loaders. We use a mathematical model in which the columns can be easily and naturally interpreted as truck itineraries. We solve this model using column generation. First, we relax the integrality of the decision variables and consider only a subset of the feasible itineraries. Itineraries with the potential to improve the current solution are added to the model iteratively. A space-time network is used both to represent the impacts of unforeseen events and to generate these itineraries. The solution obtained is generally fractional, and a branch-and-price algorithm is used to find integer solutions. Several disruption scenarios were developed to test the proposed approach on case studies from the Canadian forest industry, and numerical results are presented for the three contexts.
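As a toy illustration of the space-time network idea described above (our own simplified example, not the thesis's model), the sketch below encodes nodes as (location, time) pairs, removes the arcs affected by an unforeseen event, and recomputes a cheapest itinerary with a plain shortest-path search. The disruption is captured entirely in the network data rather than in the algorithm, which is the key property the second article relies on.

```python
import heapq

# Space-time arcs: (location, hour) -> list of ((next_location, hour), cost).
# Locations, times and costs are made-up values for illustration only.
arcs = {
    ("forest_A", 6): [(("mill", 9), 120.0), (("forest_B", 8), 60.0)],
    ("forest_B", 8): [(("mill", 11), 90.0)],
    ("mill", 9): [],
    ("mill", 11): [],
}

def cheapest_route(source, target_location, closed_arcs=frozenset()):
    """Dijkstra over the space-time network, skipping arcs closed by a disruption."""
    best = {source: (0.0, None)}          # node -> (cost, predecessor)
    heap = [(0.0, source)]
    while heap:
        cost, node = heapq.heappop(heap)
        if cost > best[node][0]:
            continue                      # stale heap entry
        if node[0] == target_location:
            path = [node]                 # reconstruct the itinerary back to the source
            while best[node][1] is not None:
                node = best[node][1]
                path.append(node)
            return cost, list(reversed(path))
        for nxt, arc_cost in arcs.get(node, []):
            if (node, nxt) in closed_arcs:
                continue                  # e.g. road closed by fire or bad weather
            if nxt not in best or cost + arc_cost < best[nxt][0]:
                best[nxt] = (cost + arc_cost, node)
                heapq.heappush(heap, (cost + arc_cost, nxt))
    return float("inf"), []

# The direct road to the mill becomes impassable after departure: re-route via forest_B.
closure = {(("forest_A", 6), ("mill", 9))}
print(cheapest_route(("forest_A", 6), "mill", closed_arcs=closure))
```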
Abstract:
Following the intrinsically linked balance sheets in his Capital Formation Life Cycle, Lukas M. Stahl explains with his Triple A Model of Accounting, Allocation and Accountability the stages of the Capital Formation process from FIAT to EXIT. Based on the theoretical foundations of legal risk laid by the International Bar Association with the help of Roger McCormick and legal scholars such as Joanna Benjamin, Matthew Whalley and Tobias Mahler, and founded on Wesley Hohfeld’s category theory of jural relations, Stahl develops his mutually exclusive Four Determinants of Legal Risk: Law, Lack of Right, Liability and Limitation. These Four Determinants of Legal Risk allow us to apply, assess, and precisely describe the respective legal risk at all stages of the Capital Formation Life Cycle, as demonstrated in case studies of nine industry verticals of the proposed and currently negotiated Transatlantic Trade and Investment Partnership between the United States of America and the European Union (TTIP), as well as in the case of the often cited financing relation between the United States and the People’s Republic of China. Having established the Four Determinants of Legal Risk and their application to the Capital Formation Life Cycle, Stahl then explores the theoretical foundations of capital formation, their historical basis in classical and neo-classical economics and its forefathers, such as the Austrians around Eugen von Boehm-Bawerk, Ludwig von Mises and Friedrich von Hayek and, most notably and controversially, Karl Marx, and their impact on today’s exponential expansion of capital formation. Starting off with the first pillar of his Triple A Model, Accounting, Stahl then moves on to explain the Three Factors of Capital Formation (Man, Machines and Money) and shows how “value-added” is created with respect to the non-monetary capital factors of human resources and industrial production. In a detailed analysis discussing the roles of the Three Actors of Monetary Capital Formation (Central Banks, Commercial Banks and Citizens), Stahl readily dismisses a number of myths regarding the creation of money, providing in-depth insight into the workings of monetary policy makers, their institutions and ultimate beneficiaries, the corporate and consumer citizens. In his second pillar, Allocation, Stahl continues his analysis of the balance sheets of the Capital Formation Life Cycle by discussing the role of the Five Key Accounts of Monetary Capital Formation (the Sovereign, Financial, Corporate, Private and International accounts) and the associated legal risks in the allocation of capital pursuant to his Four Determinants of Legal Risk. In his third pillar, Accountability, Stahl discusses the ever-recurring Crisis-Reaction-Acceleration-Sequence-History (in short: CRASH) since the beginning of the millennium: starting with the dot-com crash at the turn of the millennium, followed seven years later by the financial crisis of 2008, and the dislocations in the global economy we are facing another seven years later, today in 2015, with several sordid debt restructurings under way and hundreds of thousands of refugees on the way, caused by war and increasing inequality.
Together with the regulatory reactions they have caused in the form of so-called landmark legislation such as the Sarbanes-Oxley Act of 2002, the Dodd-Frank Act of 2010, the JOBS Act of 2012 or the introduction of the Basel Accords, Basel II in 2004 and III in 2010, the European Financial Stability Facility of 2010, the European Stability Mechanism of 2012 and the European Banking Union of 2013, Stahl analyses the acceleration in size and scope of crises that appears to meet often seemingly helpless bureaucratic responses, the inherent legal risks and the complete lack of accountability on the part of those responsible. Stahl argues that the order of the day requires addressing the root cause of the problems in the form of two fundamental design defects of our Global Economic Order, namely our monetary and judicial order. Inspired by a 1933 plan by nine University of Chicago economists to abolish the fractional reserve system, he proposes the introduction of Sovereign Money as a prerequisite to void misallocations by way of judicial order in the course of domestic and transnational insolvency proceedings, including the restructuring of sovereign debt, throughout the entire monetary system back to its origin without causing domino effects of banking collapses and failed financial institutions. Recognizing Austrian-American economist Schumpeter’s Concept of Creative Destruction as a process of industrial mutation that incessantly revolutionizes the economic structure from within, incessantly destroying the old one, incessantly creating a new one, Stahl responds to Schumpeter’s economic chemotherapy with his Concept of Equitable Default, mimicking an immunotherapy that strengthens the corpus economicus’ own immune system by providing for the judicial authority to terminate precisely those misallocations that have proven malignant, causing default, pursuing the centuries-old common law concept of equity that allows for the equitable reformation, rescission or restitution of contract by way of judicial order. Following a review of the proposed mechanisms of transnational dispute resolution and current court systems with transnational jurisdiction, Stahl advocates, as a first step in order to complete the Capital Formation Life Cycle from FIAT, the creation of money by way of credit, to EXIT, the termination of money by way of judicial order, the institution of a Transatlantic Trade and Investment Court constituted by a panel of judges from the U.S. Court of International Trade and the European Court of Justice, following the model of the EFTA Court of the European Free Trade Association. Since his proposal was first made public in June 2014, after being discussed in academic circles since 2011, it and similar proposals have found numerous public supporters. Most notably, the former Vice President of the European Parliament, David Martin, tabled an amendment in June 2015 in the course of the negotiations on TTIP calling for an independent judicial body, and the Member of the European Commission, Cecilia Malmström, presented her proposal of an International Investment Court on September 16, 2015.
Stahl concludes that, for the first time in the history of our generation, there appears to be a real opportunity to reform our Global Economic Order by curing the two fundamental design defects of our monetary and judicial order: the abolition of the fractional reserve system and the introduction of Sovereign Money, and the institution of a democratically elected Transatlantic Trade and Investment Court that, with its jurisdiction extending to cases concerning the Transatlantic Trade and Investment Partnership, may complete the Capital Formation Life Cycle, resolving cases of default with the transnational judicial authority needed for the terminal resolution of misallocations in a New Global Economic Order, from FIAT to EXIT, without the ensuing dangers of systemic collapse.
Abstract:
A gulf has tended to develop between the adoption and usage of information technology by different generations, at the heart of which are different ways of experiencing and relating to the world around us. This research idea is currently being developed following data collection, and feedback is sought on ways forward to enable impact. The research focuses on information technology in the form of multimedia. Multimedia here means ‘media’ and ‘content’ that use a combination of different content forms, or electronically integrated communication engaging all or most of the senses (e.g. graphic art, sound, animation and full-motion video presented by way of computer or other electronic means), mainly through presentational technologies. Although multimedia is not new, some organizations, particularly those in the non-profit sector, do not always have the technical or financial resources to support such systems and consequently may struggle to adopt it and support its usage amongst different generations. However, non-profit organizations are being forced to pay more attention to the way they communicate with markets and the public due to the professionalization of communication everywhere in society. The case study used for this study is a church circuit comprising 15 churches in the Midlands region of the United Kingdom, which was selected due to the diverse age groups catered for within this type of non-profit organization. Participants in the study also had a range of skills, experiences and backgrounds, which adds to the diversity of the population studied. Data gathered focused on the attitudes and opinions regarding the adoption and use of multimedia amongst different age groups. 395 questionnaires were distributed, comprising 11 opinion questions and 4 demographic questions. 83% of the questionnaires were returned, representing 35% of the total circuit membership. Three people from each of the following age categories were also interviewed: 1920-1946 (Matures); 1947-1964 (Baby Boomers); 1965-1982 (Generation X); 1983-2004 (Net Generation). Results of the questionnaire and comments from the interviews were found not to tally with the widespread assumption that the younger generation is more attracted by the use of multimedia than the older generation. The highest proportion of those who said that they gain more from a service enhanced by multimedia came from the Baby Boomers. Comments from interviews suggested that: ‘we need to embrace multimedia if we are to attract and retain the younger generation’; ‘multimedia often helps children to remain focused and clarifies the objective of the service’. However, because the younger generations’ world tends to be dominated by computer technology, the questionnaire showed that they are more likely to have higher standards when it comes to the use of multimedia, such as identifying higher levels of equipment failing to work and annoying use of sounds compared to older age groups. By comparison, the Matures age group reported the highest percentage of difficulties with the size of letters, the colour of letters and background, and sound that was not loud enough, which is to be expected. Since every organization is unique, any type of multimedia adopted and used should be specific to its needs, its stakeholders and the physical building in order to enhance that uniqueness and serve those needs.
Giving thought to whether the chosen type of multimedia is the best method for communicating the message to a particular audience, alongside how technical and financial resources are best used, can help accommodate the different age groups that need to be catered for.
Abstract:
Ria de Aveiro is a coastal lagoon located in the Central Region of Portugal and subject to the influence of the tides, resulting in a set of characteristic biotopes favouring anthropic and natural processes. When managed and controlled correctly, each of these biotopes will simultaneously allow for biodiversity and for integration into the making of the wetland landscape. In 1998, one of the final conclusions of the "MARIA" Demonstration Programme for the Integrated Management of Ria de Aveiro was that the poor current state of the area's environment resulted from a set of interrelated factors. The Programme selected four (4) pilot-projects towards the integrated management of the lagoon biotopes as possible scenarios for an intervention. This selection was based on criteria related to environmental priorities and the maintenance of traditional economic activities in the region. The idea of choosing projects that would involve the whole geographic space of the Ria, without forgetting the other important themes interrelated with the Management Structure, emerged as a relevant aspect for their definition. Thus, and as a first test of this Management Structure's functionality, the following task forces were put forward: Recovery and valorisation of the piers; Recovery of the former salt pans; Management of the agricultural fields of Baixo-Vouga; Implementation of measures for the classification of the Protected Landscape Area of the River Caster Mouth. This paper reports the main results of these pilot-projects attained during their first year, especially the intervention strategies defined by the Partnership created for this aim.
Abstract:
The purpose of this study is to investigate two candidate waveforms for next generation wireless systems, filtered Orthogonal Frequency Division Multiplexing (f-OFDM) and Universal Filtered Multi-Carrier (UFMC). The evaluation is based on a power spectral density analysis of the signal and performance measurements in synchronous and asynchronous transmission. In f-OFDM we implement a soft truncated filter with a length of 1/3 of the OFDM symbol. In UFMC we use the Dolph-Chebyshev filter, limited to the length of the zero padding (ZP). The simulation results demonstrate that both waveforms have better spectral behaviour compared with conventional OFDM. However, the inter-symbol interference (ISI) induced by the filter in f-OFDM, and the inter-carrier interference (ICI) induced in UFMC due to cyclic prefix (CP) reduction, should be kept under control. In addition, in a synchronous transmission case with ideal parameters, f-OFDM and UFMC appear to have performance similar to OFDM. When carrier frequency offset (CFO) is imposed on the transmission, UFMC outperforms OFDM and f-OFDM.
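A brief numerical sketch of the filtering step: one OFDM symbol is generated by an IFFT and a Dolph-Chebyshev window of zero-padding length, as the abstract uses for UFMC, is applied as an FIR subband filter. The parameter values are arbitrary assumptions chosen for illustration, not the paper's simulation settings.

```python
import numpy as np
from scipy.signal.windows import chebwin

n_fft, zp_len = 64, 16                      # illustrative FFT size and zero-padding length
rng = np.random.default_rng(0)

# Random QPSK symbols on all subcarriers, one OFDM symbol via IFFT.
qpsk = (rng.choice([-1, 1], n_fft) + 1j * rng.choice([-1, 1], n_fft)) / np.sqrt(2)
ofdm_symbol = np.fft.ifft(qpsk)

# UFMC-style subband filter: Dolph-Chebyshev window of ZP length, 60 dB sidelobe attenuation.
fir = chebwin(zp_len, at=60)
fir = fir / np.sum(fir)                     # normalize DC gain to 1

# Filtering lengthens the symbol by (zp_len - 1) samples, which the zero padding absorbs.
filtered = np.convolve(ofdm_symbol, fir)
print(len(ofdm_symbol), len(filtered))      # 64, 79
```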
Abstract:
In deregulated power markets it is necessary to have an appropriate Transmission Pricing methodology that also takes into account “Congestion and Reliability”, in order to ensure an economically viable, equitable, and congestion-free power transfer capability, with high reliability and security. This thesis presents results of research conducted on the development of a Decision Making Framework (DMF) of concepts, data analytic and modelling methods for the Reliability-benefit Reflective Optimal cost evaluation for the calculation of Transmission Cost for composite power systems, using probabilistic methods. The methodology within the DMF devised and reported in this thesis utilises a full AC Newton-Raphson load flow and a Monte-Carlo approach to determine Reliability Indices, which are then used in the proposed Meta-Analytical Probabilistic Approach (MAPA) for the evaluation and calculation of the Reliability-benefit Reflective Optimal Transmission Cost (ROTC) of a transmission system. This DMF includes methods for allocating transmission line embedded costs among transmission transactions, accounting for line capacity use as well as congestion costing that can be used for pricing, applying the Power Transfer Distribution Factor (PTDF) as well as Bialek’s method; together these form a methodology consisting of a series of methods and procedures, explained in detail in the thesis, for the proposed MAPA for ROTC. The MAPA utilises the Bus Data, Generator Data, Line Data, Reliability Data and Customer Damage Function (CDF) Data for the evaluation of Congestion, Transmission and Reliability costing studies using the proposed application of PTDF and other established/proven methods, which are then compared, analysed and selected according to the area/state requirements and then integrated to develop the ROTC. Case studies involving standard 7-Bus, IEEE 30-Bus and 146-Bus Indian utility test systems are conducted and reported in the relevant sections of the dissertation. There is a close correlation between the results obtained through the proposed application of the PTDF method and those of Bialek’s and different MW-Mile methods. The novel contributions of this research work are: firstly, the application of the PTDF method developed for the determination of Transmission and Congestion costing, which is further compared with other proven methods; the viability of the developed method is explained in the methodology, discussion and conclusion chapters. Secondly, the development of a comprehensive DMF which helps decision makers analyse and select a costing approach according to their requirements, as all the costing approaches in the DMF have been integrated to achieve the ROTC. Thirdly, the composite methodology for calculating ROTC has been formed into suites of algorithms and MATLAB programs for each part of the DMF, which are further described in the methodology section. Finally, the dissertation concludes with suggestions for future work.
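To make the PTDF idea concrete, here is a small, generic DC power-flow PTDF computation for a 3-bus example. This is our own illustration using standard DC assumptions, not the thesis's MATLAB implementation or its test systems.

```python
import numpy as np

# 3-bus example: branches as (from_bus, to_bus, reactance_pu); bus 0 is the slack.
branches = [(0, 1, 0.1), (1, 2, 0.1), (0, 2, 0.2)]
n_bus = 3

# Branch-bus incidence matrix A and branch susceptance matrix D = diag(1/x).
A = np.zeros((len(branches), n_bus))
for k, (f, t, x) in enumerate(branches):
    A[k, f], A[k, t] = 1.0, -1.0
D = np.diag([1.0 / x for _, _, x in branches])

# Reduced nodal susceptance matrix (slack row/column removed) and PTDF matrix:
# PTDF[l, b] = MW flow on line l per 1 MW injected at bus b and withdrawn at the slack.
B = A.T @ D @ A
B_red = B[1:, 1:]
ptdf = D @ A[:, 1:] @ np.linalg.inv(B_red)
print(np.round(ptdf, 3))
```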
Abstract:
On 28 July 2010, the Nigerian Federal Executive Council approved January 1, 2012 as the effective date for the convergence of Nigerian Statements of Accounting Standards (SAS), or Nigerian GAAP (NG-GAAP), with International Financial Reporting Standards (IFRS). By this pronouncement, all publicly listed companies and significant public interest entities in Nigeria were statutorily required to issue IFRS-based financial statements for the year ended December 2012. This study investigates the impact of the adoption of IFRS on the financial statements of Nigerian listed Oil and Gas entities using six years of data covering three years before and three years after IFRS adoption in Nigeria and other African countries. First, the study evaluates the impact of IFRS adoption on the Exploration and Evaluation (E&E) expenditures of listed Oil and Gas companies. Second, it examines the impact of IFRS adoption on the provision for decommissioning of Oil and Gas installations and environmental rehabilitation expenditures. Third, the study analyses the impact of the adoption of IFRS on the average daily Crude Oil production cost per Barrel. Fourth, it examines the extent to which the adoption and implementation of IFRS affects the Key Performance Indicators (KPIs) of listed Oil and Gas companies. The study further explores the impact of IFRS adoption on the contractual relationships between the Nigerian Government and Oil and Gas companies in terms of Joint Ventures (JVs) and Production Sharing Contracts (PSCs) as they relate to taxes, royalties, bonuses and Profit Oil Split. Paired-samples t-tests, Wilcoxon signed-rank tests and Gray’s (1980) Index of Conservatism analyses were conducted: the accounting numbers, financial ratios and industry-specific performance measures under GAAP and IFRS were computed and analysed, and the significance of the differences in the mean, median and Conservatism Index values before and after IFRS adoption was assessed. Questionnaires were then administered to the key stakeholders in the adoption and implementation of IFRS and the responses collated and analysed. The results of the analyses reveal that most of the accounting numbers, financial ratios and industry-specific performance measures examined changed significantly as a result of the transition from GAAP to IFRS. The E&E expenditures and the mean cost of Crude Oil production per barrel of Oil and Gas companies increased significantly. The GAAP values of inventories, GPM, ROA, Equity and TA were also significantly different from the IFRS values. However, the differences in the provision for decommissioning expenditures were not statistically significant. Gray’s (1980) Conservatism Index shows that Oil and Gas companies were more conservative under GAAP than under the IFRS regime. The questionnaire analyses reveal that IFRS-based financial statements are of higher quality, easier to prepare and present to management and easier to compare among competitors across the Oil and Gas sector, but slightly more difficult to audit compared to GAAP-based financial statements. To my knowledge, this is the first empirical research to investigate the impact of IFRS adoption on the financial statements of listed Oil and Gas companies. The study will therefore make an enormous contribution to the academic literature and body of knowledge and fill the existing knowledge gap regarding the impact and implications of IFRS adoption on the financial statements of Oil and Gas companies.
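A brief sketch of the statistical comparison described above, using illustrative numbers rather than the study's data. The form of Gray's (1980) conservatism index used here is one common formulation, stated as an assumption, and should be checked against the original source.

```python
import numpy as np
from scipy.stats import ttest_rel, wilcoxon

# Illustrative paired observations for one metric (e.g. E&E expenditure per firm),
# measured under local GAAP and restated under IFRS.
gaap = np.array([120.0, 95.0, 180.0, 60.0, 210.0, 140.0])
ifrs = np.array([135.0, 100.0, 205.0, 58.0, 240.0, 155.0])

t_stat, t_p = ttest_rel(gaap, ifrs)          # paired-samples t-test
w_stat, w_p = wilcoxon(gaap, ifrs)           # Wilcoxon signed-rank test

# One common statement of Gray's (1980) index of conservatism, treating the IFRS
# figure as the benchmark; arithmetically, the index falls below 1 whenever the
# GAAP figure is the lower of the two.
conservatism = 1.0 - (ifrs - gaap) / np.abs(ifrs)

print(f"t = {t_stat:.2f} (p = {t_p:.3f}), W = {w_stat:.1f} (p = {w_p:.3f})")
print("mean conservatism index:", round(conservatism.mean(), 3))
```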
Abstract:
Aim. We report a case of ulnar and palmar arch artery aneurysm in a 77-year-old man with no history of occupational or recreational trauma, vasculitis, infections or congenital anatomic abnormalities. We also performed a computerized literature search in PubMed using the keywords “ulnar artery aneurysm” and “palmar arch aneurysm”. Case report. A 77-year-old male patient was admitted to hospital with a pulsating mass at the distal right ulnar artery and deep palmar arch; on ultrasound and CT examination, a 35-millimeter saccular aneurysm of the right ulnar artery and a 15-millimeter dilatation of the deep palmar arch were detected. He was asymptomatic for distal embolization and pain. Under local anesthesia, the ulnar artery and deep palmar arch dilatations were resected. Reconstruction of the vessels was performed through an end-to-end microvascular repair. Histological examination confirmed the absence of vasculitis and collagenopathies. In the postoperative period there were no clinical signs of peripheral ischemia, and Allen’s test and ultrasound examination were normal. At six-month follow-up, the patient was still asymptomatic, with a normal Allen test, no signs of distal digital ischemia and patency of the treated vessel with normal flow on duplex ultrasound. Conclusion. True spontaneous aneurysms of the ulnar artery and palmar arch are rare and can be successfully treated with resection and microvascular reconstruction.
Abstract:
This paper analyzes the dynamics of the American Depositary Receipt (ADR) of a Colombian bank (Bancolombia) in relation to its pricing factors: the underlying (preferred) share price, the exchange rate and the US market index. The aim is to test if there is a long-term relation among these variables that would imply predictability. One cointegrating relation is found, allowing the use of a vector error correction model to examine the transmission of shocks to the underlying prices, the exchange rate, and the US market index. The main finding of this paper is that in the short run, the underlying share price seems to adjust after changes in the ADR price, pointing to the fact that the NYSE (trading market for the ADR) leads the Colombian market. However, in the long run, both the underlying share price and the ADR price adjust to changes in one another.
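A compact sketch of the estimation steps the abstract describes (a Johansen cointegration test followed by a vector error correction model), using simulated placeholder series instead of the actual ADR, share-price, exchange-rate, and index data.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import coint_johansen, VECM

# Simulated stand-ins for log ADR price, log underlying share price, log exchange
# rate, and log US market index; the paper itself uses observed market data.
rng = np.random.default_rng(1)
n = 500
common_trend = np.cumsum(rng.normal(size=n))          # shared stochastic trend
data = pd.DataFrame({
    "adr":   common_trend + rng.normal(scale=0.5, size=n),
    "share": common_trend + rng.normal(scale=0.5, size=n),
    "fx":    np.cumsum(rng.normal(size=n)),
    "spx":   np.cumsum(rng.normal(size=n)),
})

# Johansen trace test for the number of cointegrating relations.
jres = coint_johansen(data, det_order=0, k_ar_diff=1)
print("trace statistics:", np.round(jres.lr1, 2))
print("95% critical values:", jres.cvt[:, 1])

# VECM with one cointegrating relation, as the paper finds for its system.
vecm_res = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="ci").fit()
print("adjustment coefficients (alpha):")
print(vecm_res.alpha)
```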