950 results for Shipping process with debit
Abstract:
In this paper we analyze the time of ruin in a risk process with the interclaim times being Erlang(n) distributed and a constant dividend barrier. We obtain an integro-differential equation for the Laplace transform of the time of ruin. Explicit solutions for the moments of the time of ruin are presented when the individual claim amounts have a distribution with rational Laplace transform. Finally, some numerical results and a comparison with the classical risk model, with interclaim times following an exponential distribution, are given.
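A minimal Monte Carlo sketch of this setup, assuming Erlang(2) interclaim times, exponentially distributed claims, and the usual barrier rule that premium income above the barrier is paid out as dividends (all parameter values are illustrative, not those of the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def time_of_ruin(u=1.0, b=3.0, c=1.2, n=2, lam=2.0, claim_mean=1.0):
    """One path: surplus grows at premium rate c, is capped at the dividend
    barrier b, and pays claims at Erlang(n)-distributed intervals."""
    t, surplus = 0.0, u
    while True:
        w = rng.gamma(n, 1.0 / lam)          # Erlang(n) interclaim time
        t += w
        surplus = min(surplus + c * w, b)    # premiums, capped at the barrier
        surplus -= rng.exponential(claim_mean)
        if surplus < 0:
            return t                         # with a barrier, ruin is certain

times = [time_of_ruin() for _ in range(10_000)]
print(f"estimated E[time of ruin] = {np.mean(times):.3f}")
```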
Abstract:
Ordering in a binary alloy is studied by means of a molecular-dynamics (MD) algorithm which makes it possible to reach the domain-growth regime. Results are compared with Monte Carlo simulations using a realistic vacancy-atom (MC-VA) mechanism. At low temperatures, fast growth with a dynamical exponent x > 1/2 is found for both MD and MC-VA. The study of a nonequilibrium ordering process with the two methods shows the importance of the inhomogeneity of the excitations in the system in determining its macroscopic kinetics.
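As a crude illustration of the vacancy mechanism, the following toy Metropolis sketch moves a single vacancy on a 2D binary lattice with an ordering (unlike-neighbour-favouring) interaction; the coupling, temperature, lattice size and order parameter are illustrative and far simpler than the realistic MC-VA dynamics of the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
L, J, T = 32, 1.0, 0.5                      # lattice size, coupling, temperature
NBRS = [(-1, 0), (1, 0), (0, -1), (0, 1)]

spins = rng.choice([-1, 1], size=(L, L))    # A/B atoms encoded as +-1
vi, vj = 0, 0                               # single vacancy site
spins[vi, vj] = 0

def hop_dE(s, i, j, ni, nj):
    """Energy change when the atom at (ni, nj) hops into the vacancy at (i, j).
    E = J * sum over bonds of s_k * s_l with J > 0, so unlike neighbours are
    favoured and the alloy orders into a checkerboard pattern."""
    a = s[ni, nj]
    e_old = a * sum(s[(ni + di) % L, (nj + dj) % L] for di, dj in NBRS)
    s[ni, nj], s[i, j] = 0, a               # tentatively perform the hop
    e_new = a * sum(s[(i + di) % L, (j + dj) % L] for di, dj in NBRS)
    s[i, j], s[ni, nj] = 0, a               # undo it
    return J * (e_new - e_old)

for _ in range(200_000):
    di, dj = NBRS[rng.integers(4)]
    ni, nj = (vi + di) % L, (vj + dj) % L
    dE = hop_dE(spins, vi, vj, ni, nj)
    if dE <= 0 or rng.random() < np.exp(-dE / T):   # Metropolis acceptance
        spins[vi, vj], spins[ni, nj] = spins[ni, nj], 0
        vi, vj = ni, nj

# Staggered magnetization as a checkerboard order parameter
sub = (-1) ** np.add.outer(np.arange(L), np.arange(L))
print("order parameter:", abs((spins * sub).sum()) / (L * L - 1))
```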
Abstract:
The continuous-time random walk (CTRW) formalism can be adapted to encompass stochastic processes with memory. In this paper we show how the random combination of two different unbiased CTRWs can give rise to a process with a clear drift, if one of them is a CTRW with memory. If one identifies the other as noise, the effect can be thought of as a kind of stochastic resonance. The ultimate origin of this phenomenon is the same as that of the Parrondo paradox in game theory.
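The paper's CTRW construction is not reproduced here, but the game-theoretic phenomenon it invokes is easy to demonstrate. Below is a sketch of the classic Parrondo games with the standard bias eps = 0.005: game A and game B are each losing on their own, yet a random mixture of the two wins:

```python
import numpy as np

rng = np.random.default_rng(2)
EPS = 0.005

def play(strategy, steps=100_000):
    """Final capital after playing strategy 'A', 'B' or 'mix'."""
    capital = 0
    for _ in range(steps):
        game = strategy if strategy != "mix" else rng.choice(["A", "B"])
        if game == "A":
            p = 0.5 - EPS                    # slightly unfair coin
        else:                                # game B depends on capital mod 3
            p = (0.1 - EPS) if capital % 3 == 0 else (0.75 - EPS)
        capital += 1 if rng.random() < p else -1
    return capital

for s in ("A", "B", "mix"):
    print(s, play(s))                        # A and B drift down, mix drifts up
```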
Abstract:
In this work we present a simulation of a recognition process that uses the perimeter characterization of simple plant leaves as the sole discriminating parameter. Data coding that makes the description independent of leaf size and orientation may penalize recognition performance for some varieties. Border description sequences are therefore used, and Principal Component Analysis (PCA) is applied in order to determine the best number of components for the classification task, which is implemented by means of a Support Vector Machine (SVM) system. The results obtained are satisfactory; compared with [4], our system improves the recognition success while diminishing the variance at the same time.
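A minimal sketch of such a pipeline on synthetic contour data (the leaf dataset and the exact border descriptors of the paper are not available here, so each variety is a fabricated centroid-distance signature):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(3)
theta = np.linspace(0, 2 * np.pi, 128, endpoint=False)

def make_variety(k, n=60):
    """Synthetic stand-in for a border description sequence: a centroid-distance
    signature r(theta) whose harmonic content depends on the variety k."""
    base = 1 + 0.3 * np.sin(k * theta) + 0.1 * np.cos((k + 2) * theta)
    return base + 0.05 * rng.standard_normal((n, theta.size))

X = np.vstack([make_variety(k) for k in range(1, 6)])
y = np.repeat(np.arange(5), 60)

# Sweep the number of principal components fed to the SVM
for n_comp in (2, 5, 10, 20):
    clf = make_pipeline(PCA(n_components=n_comp), SVC(kernel="rbf", C=1.0))
    score = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{n_comp:2d} components: CV accuracy = {score:.3f}")
```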
Abstract:
In this work we present a simulation of a recognition process that uses the perimeter characterization of simple plant leaves as the sole discriminating parameter. Data coding that makes the description independent of leaf size and orientation may penalize recognition performance for some varieties. Border description sequences are therefore used to characterize the leaves. Independent Component Analysis (ICA) is then applied in order to determine the best number of components to consider for the classification task, which is implemented by means of an Artificial Neural Network (ANN). The results obtained with ICA as a pre-processing tool are satisfactory; compared with some references, our system improves the recognition success to up to 80.8%, depending on the number of independent components considered.
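The ICA/ANN variant differs from the previous pipeline only in the decomposition and the classifier. A sketch under the same synthetic-contour assumption, with scikit-learn's FastICA and MLPClassifier standing in for the paper's ICA and ANN:

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(4)
theta = np.linspace(0, 2 * np.pi, 128, endpoint=False)
# Five fabricated varieties of centroid-distance signatures, 60 samples each
X = np.vstack([1 + 0.3 * np.sin(k * theta) + 0.05 * rng.standard_normal((60, 128))
               for k in range(1, 6)])
y = np.repeat(np.arange(5), 60)

for n_comp in (2, 5, 10, 20):                # study the best number of components
    clf = make_pipeline(FastICA(n_components=n_comp, max_iter=1000),
                        MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000))
    print(n_comp, cross_val_score(clf, X, y, cv=3).mean())
```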
Abstract:
The Feller process is a one-dimensional diffusion process with linear drift and a state-dependent diffusion coefficient that vanishes at the origin. The process is positive definite, and it is this property, along with its linear character, that has made the Feller process a convenient candidate for modeling a number of phenomena ranging from single-neuron firing to the volatility of financial assets. While general properties of the process have long been well known, less known are properties related to level crossing, such as the first-passage and escape problems. In this work we thoroughly address these questions.
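For concreteness, the Feller process can be written as dX = (a - b X) dt + sigma sqrt(X) dW. A minimal Euler-Maruyama sketch of the first-passage problem, with illustrative parameters (the paper treats these questions analytically):

```python
import numpy as np

rng = np.random.default_rng(5)

def first_passage(x0=1.0, level=2.0, a=1.0, b=1.0, sigma=0.5,
                  dt=1e-2, t_max=50.0):
    """First time the Feller process dX = (a - b*X) dt + sigma*sqrt(X) dW,
    started at x0, reaches `level`; Euler-Maruyama with truncation at zero."""
    x, t = x0, 0.0
    while t < t_max:
        dw = np.sqrt(dt) * rng.standard_normal()
        x = max(x + (a - b * x) * dt + sigma * np.sqrt(max(x, 0.0)) * dw, 0.0)
        t += dt
        if x >= level:
            return t
    return np.nan                            # no crossing within t_max

times = np.array([first_passage() for _ in range(500)])
print("mean first-passage time:", np.nanmean(times))
```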
Abstract:
In this paper we propose a method for computing JPEG quantization matrices for a given mean square error (MSE) or PSNR. We then employ our method to compute JPEG standard progressive operation mode definition scripts using a quantization-based approach. It is therefore no longer necessary to use a trial-and-error procedure to obtain a desired PSNR and/or definition script, which reduces cost. First, we establish a relationship between a Laplacian source and its uniform quantization error. We apply this model to the coefficients obtained in the discrete cosine transform stage of the JPEG standard. An image may then be compressed using the JPEG standard under a global MSE (or PSNR) constraint and a set of local constraints determined by the JPEG standard and visual criteria. Second, we study the JPEG standard progressive operation mode from a quantization-based approach. A relationship is found between the image quality measured at a given stage of the coding process and a quantization matrix, so the definition-script construction problem can be reduced to a quantization problem. Simulations show that our method generates better quantization matrices than the classical method based on scaling the JPEG default quantization matrix. The PSNR estimate usually has an error smaller than 1 dB, and this figure decreases for high PSNR values. Definition scripts may be generated while avoiding an excessive number of stages and removing small stages that do not contribute a noticeable image-quality improvement during the decoding process.
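A rough sketch of the "quantization matrix for a target PSNR" idea: approximate each DCT coefficient's distortion with the high-rate uniform-quantizer formula q^2/12 in place of the paper's full Laplacian model, predict the global MSE from per-coefficient variances, and bisect on a scale factor of the JPEG default table until the predicted PSNR meets the target. The variance model below is hypothetical, and the scaled-default-table scheme is the classical approach the paper improves upon, shown only to make the quantization-to-PSNR link concrete:

```python
import numpy as np

# JPEG default luminance quantization table (JPEG standard, Annex K)
Q_BASE = np.array([
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99]], dtype=float)

def predicted_psnr(scale, coeff_var):
    """High-rate model: a coefficient quantized with step q has MSE ~ q^2/12,
    but never more than the coefficient's own variance."""
    q = np.maximum(np.round(scale * Q_BASE), 1.0)
    mse = np.minimum(q ** 2 / 12.0, coeff_var).mean()
    return 10.0 * np.log10(255.0 ** 2 / mse)

def scale_for_psnr(target, coeff_var, lo=0.02, hi=8.0, iters=50):
    """Bisect the table scale so the predicted PSNR hits the target."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if predicted_psnr(mid, coeff_var) > target:
            lo = mid                         # quality too high: coarser steps
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical per-coefficient DCT variances, decaying away from the DC term
coeff_var = 3000.0 / (1.0 + np.add.outer(np.arange(8), np.arange(8))) ** 2
s = scale_for_psnr(35.0, coeff_var)
print(f"scale = {s:.3f}, predicted PSNR = {predicted_psnr(s, coeff_var):.2f} dB")
```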
Abstract:
The aim of this thesis is to examine how the roadmapping technique can be used to support offering planning in connection with the development of new products. The work consists of a theoretical and a practical part. The theoretical framework was created to clarify how current research and development projects ultimately form the future offering. Creating a successful product offering requires both the development of new technologies and an understanding of the needs of the customers in the market. Developing customer-oriented products requires identifying the signals coming from the business environment and the customer interface and channeling them to the product and technology platforms. A strategy is created to support decision-making at the different stages of the process. The company-specific part consists of an analysis based on a commissioned survey and on interviews. The analysis covers the Major Project unit's current offering-planning process, the application of strategy, the gathering and prioritization of information, portfolio management, and the use of the roadmapping technique. The proposed solution presents an offering-planning process and the critical components related to it. The roadmapping technique is used to link the business environment, the products, and the technology to one another. The business environment and the products are also connected by means of the linked-grids technique.
Abstract:
This work was carried out in the Laboratory of Fluid Dynamics at Lappeenranta University of Technology during the years 1991-1996. The research was part of a larger high-speed-technology development effort. First, there was the idea of building high-speed machinery applications around the Brayton cycle. There was a clear need to deepen the knowledge of the cycle itself and to take a new approach in this field of research. The removal of water from humid air also seemed very interesting. The goal of this work was to study methods of designing high-speed machinery for the reversed Brayton cycle, from theoretical principles to practical applications. The reversed Brayton cycle can be employed as an air dryer, a heat pump, or a refrigerating machine. In this research the use of humid air as a working fluid has an environmental advantage as well. A new calculation method for the Brayton cycle is developed. In this method the expansion process in the turbine is especially important because of the condensation of the water vapour in the humid air. This physical phenomenon can have significant effects on the performance of the application. The influence of calculating the process with actual, achievable process-equipment efficiencies is also essential for the development of future machinery. The above theoretical calculations are confirmed with two different laboratory prototypes. The high-speed machinery concept allows one to build an application with only one rotating shaft that includes all the major parts: the high-speed motor, the compressor, and the turbine wheel. The use of oil-free bearings and high rotational speeds gives several advantages compared to conventional machinery: light weight, compact structure, safe operation, and higher efficiency over a large operational region. There are always problems when theory is applied in practice. The pressure, temperature, and humidity probes were calibrated with care, but the measurement errors were still not negligible. Several different separators were examined, and in all cases the content of the separated water was not exact. Due to the compact sizes and structures of the prototypes, the process measurements were somewhat difficult. The experimental results agree well with the theoretical calculations. These experiments prove the operation of the process and lay the ground for further development. The results of this work open very promising possibilities for the design of new, commercially competitive applications that use high-speed machinery and the reversed Brayton cycle.
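A back-of-the-envelope sketch of a reversed Brayton refrigeration cycle for dry air (the thesis models humid air with condensation in the turbine, which this ideal-gas estimate deliberately ignores; the pressure ratio, efficiencies and heat-exchanger effectiveness are illustrative):

```python
# Reversed Brayton cycle: compress ambient air, reject heat, expand through
# a turbine to below ambient, and use the cold stream for cooling.
cp, gamma = 1005.0, 1.4                # J/(kg K) and heat-capacity ratio, dry air
T1, pr = 293.15, 2.5                   # inlet temperature [K], pressure ratio
eta_c, eta_t, eps = 0.80, 0.82, 0.90   # compressor/turbine efficiency, HX effectiveness

k = (gamma - 1.0) / gamma
T2 = T1 + T1 * (pr ** k - 1.0) / eta_c       # after the compressor
T3 = T2 - eps * (T2 - T1)                    # after heat rejection to ambient
T4 = T3 - eta_t * T3 * (1.0 - pr ** -k)      # after the turbine

q_cold = cp * (T1 - T4)                      # specific cooling effect [J/kg]
w_net = cp * (T2 - T1) - cp * (T3 - T4)      # compressor work minus turbine work
print(f"turbine outlet = {T4 - 273.15:.1f} C, COP = {q_cold / w_net:.2f}")
```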
Abstract:
A neural network procedure to solve inverse chemical kinetics problems is discussed in this work. Rate constants are calculated from the product concentrations of an irreversible consecutive reaction: the hydrogenation of the citral molecule, a process of industrial interest. Both simulated and experimental data are considered. Errors of up to 7% in the simulated concentrations were assumed in order to investigate the robustness of the inverse procedure. The proposed method is also compared with two common methods in nonlinear analysis: the Simplex and Levenberg-Marquardt approaches. In all situations investigated, the neural network approach was numerically stable and robust with respect to deviations in the initial conditions and experimental noise.
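A toy version of this inverse problem, assuming a first-order consecutive scheme A -> B -> C as a stand-in for the citral hydrogenation: generate concentration profiles for known rate constants, corrupt them with up to 7% noise, and train a network to map the profiles back to (k1, k2):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(6)
t = np.linspace(0.1, 5.0, 20)                # sampling times

def profiles(k1, k2, a0=1.0):
    """Analytic concentrations of A and B for first-order A -> B -> C."""
    a = a0 * np.exp(-k1 * t)
    b = a0 * k1 / (k2 - k1) * (np.exp(-k1 * t) - np.exp(-k2 * t))
    return np.concatenate([a, b])

# Training set: random rate constants (ranges chosen so k1 != k2),
# concentration profiles corrupted with up to 7% multiplicative noise
K = rng.uniform([0.2, 1.0], [0.8, 2.0], size=(2000, 2))
X = np.array([profiles(k1, k2) for k1, k2 in K])
X *= 1.0 + 0.07 * rng.uniform(-1, 1, X.shape)

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000).fit(X, K)

k_true = (0.5, 1.5)
k_est = net.predict(profiles(*k_true).reshape(1, -1))[0]
print("true:", k_true, "estimated:", np.round(k_est, 3))
```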
Abstract:
The aim of the research was to create a comprehensive city-branding process. This was done by identifying the key target groups of the city and considering them in the city-branding process. Key stakeholders were also identified and taken into consideration when creating the branding process. As an empirical study, the first three stages of the city-branding process were implemented for the city of Lappeenranta, with students as the case target group. An interview with city officials was conducted, as well as a student survey on the current city image of Lappeenranta. Quantitative research methods were used to analyze the results of the survey. A comprehensive city-branding process with eight stages was created in the research. Target groups were considered in the process by identifying the target-group-dependent stages. The empirical study revealed that the current city image held by the students consists of six dimensions. These dimensions were analyzed from the viewpoint of Lappeenranta with the help of an importance-performance analysis.
Abstract:
Agile software development has grown in popularity since the Agile Manifesto was declared in 2001. However, there is a strong belief that agile methods are not suitable for embedded, critical, or real-time software development, even though multiple studies and cases show otherwise. This thesis presents a custom agile process that can be used in embedded software development. The reasons for the presumed unfitness of agile methods in embedded software development have mainly been based on the feeling that these methods provide no real control, no strict discipline, and less rigorous engineering practices. One starting point is therefore to provide a light process with a disciplined approach to embedded software development. Agile software development has gained popularity because there are still big issues in software development as a whole: projects fail due to schedule slips, budget overruns, or failure to meet business needs. This does not change when talking about embedded software development. These issues remain valid, with multiple new ones arising from the complex and demanding domain in which embedded software developers work. These issues are another starting point for this thesis. The thesis is based heavily on Feature Driven Development (FDD), a software development methodology that can be seen as a runner-up to the most popular agile methodologies. FDD as such is quite process-oriented and lacks a few practices commonly considered extremely important in agile development methodologies. In order for FDD to gain acceptance in the software development community, it needs to be modified and enhanced. This thesis presents an improved custom agile process that can be used in embedded software development projects varying in size from 10 to 500 persons. This process is based on Feature Driven Development and, where suitable, on Extreme Programming, Scrum, and Agile Modeling. Finally, the thesis shows how the new process responds to the common issues in embedded software development. The process of creating the new process is evaluated in a retrospective, and guidelines for such process-creation work are introduced. These emphasize agility in process development as well, through early and frequent deliveries and the teamwork needed to create a suitable process.
Abstract:
Japan has been a major actor in the field of development cooperation for five decades, even holding the title of largest donor of Official Development Assistance (ODA) during the 1990s. Financial flows, however, are subject to pre-existing paradigms that dictate both donor and recipient behaviour, and in this respect Japan has been left wanting for more recognition. The dominance of the so-called 'Washington Consensus', embodied in the International Monetary Fund (IMF) and the World Bank, has long circumvented indigenous approaches to development problems. The Tokyo International Conference on African Development (TICAD) is a development cooperation conference that Japan has hosted every five years since 1993. As the main organizer of the conference, Japan has sought a leading position in African development. This has come in the wake of success in the Asian region, where Japan has called attention to its role in the so-called 'Asian Miracle' of fast-growing economies. These aspirations have enabled Japan to assert itself as a major player in directing the course of the global development discourse, using historical narratives from both Asia and Africa. Over the years TICAD has evolved into a continuous process with ministerial and follow-up meetings between conferences. Each conference has produced a declaration that stipulates how the participants approach the question of African development. Although TICAD is a multilateral framework, Japan has over the years made its presence more and more felt within the process. This research examines the way Japan approaches the paradigms of international development cooperation and tries to direct them in the context of the TICAD process. Supplementing these questions are inquiries concerning Japan's foreign-policy aspirations. The research shows that Japan has used the conference platform to contest other development actors, especially the dominant forces of the IMF and the World Bank, in the development discourse debate. Japan's dominance of the process is evident in the narratives found in the conference documents. Relative success has come from remaining consistent, as shown by the acceptance of items from the TICAD agenda in other forums, such as the G8. But the emergence of new players such as China has changed the playing field, as they engage other developing countries on a more equal level.