966 results for Two-point boundary value problems
Abstract:
Competence can become a problem for companies' competitiveness if it is not given attention from the strategic planning stage onwards. Even if customers' changed expectations can be identified before competitors do so, the company may still be unable to respond to them if there is no time to learn new things, or to learn in new ways. The goal of this Master's thesis is to draw up, for the Etelä-Karjalan aikuisopisto (South Karelia Adult Education Centre), a description of personnel competence development: how, starting from the definition of strategy, personnel competence can be improved in pursuit of competitive advantage in the market. Personnel competence development must be systematic, must take into account the interests of the individual, the group and the organization where needed, and must be simple and concrete enough that the plan can be implemented, monitored and developed further. The theoretical part of the thesis describes the significance of vision and strategy for competence development. In addition, individual learning, the learning process and its development into organizational capability are examined. The competence infrastructure is approached from the perspectives of organizational culture, commitment and the development system. The empirical part presents the goals of personnel competence development at the adult education centre, current practices, development alternatives and follow-up measures. Competence is the foundation of the professional skill of the centre's personnel. The definition of competence needs should focus on developing the centre's core competences. Through motivation and commitment, personnel should be seen as a factor that grows human knowledge capital, to which intangible capital (data, information, etc.) and a strategic reserve, such as innovation activity that produces competitive advantage, can be smoothly attached.
Abstract:
This work studied the structure of sulfonated polystyrene-divinylbenzene-based gel-type, mesoporous and macroporous ion exchange resins using several different characterization methods. In addition, the effect of resin pore size on the chromatographic separation of amino acids was studied. The main emphasis of the work was on determining the pore size and porosity of the resins. Electron microscopy, nitrogen adsorption measurements and inverse size-exclusion chromatography were used for this purpose. The best results were obtained with inverse size-exclusion chromatography, which is based on using dextran polymers of different sizes as probe molecules. The method is suited to studying meso- and macroporosity, but its weakness is the very long measurement time. The method also yields a pore size distribution, but measuring a single resin can take a week. The method was modified to use a mixture of two dextran polymers representing the pore size range to be determined. The chromatographic run conditions were optimized so that the response peaks of the dextrans in the injected mixture were resolved from each other. This allowed the relative porosity of the stationary phase under study to be determined reliably. This fast method based on inverse size-exclusion chromatography, developed in this work, is called the two-point method. The amount and distribution of the sulfonic acid groups of the resins were studied by determining the cation exchange capacity of the resins and by examining the resin surface with confocal Raman spectroscopy. To establish the ion exchange capability of the sulfonic acid groups, the S/K ratio was measured across the cross-sectional surface of resin converted to the K+ form. Based on the results, the resins were uniformly sulfonated and 95 % of the sulfur atoms were in functional ion exchange groups. In the amino acid separation, lysine, serine and tryptophan were used as model compounds. The resin was in the NH4+ form and the bed volume was 91 mL. Water at pH 10 was used as the eluent. The best result was obtained at a flow rate of 0.1 mL/min, at which all three amino acids were separated from each other on the mesoporous KEF78 resin of Finex Oy. With the other resins studied, the three amino acids were not completely separated under any run conditions.
Abstract:
This thesis studies gray-level distance transforms, particularly the Distance Transform on Curved Space (DTOCS). The transform is produced by calculating distances on a gray-level surface. The DTOCS is improved by defining more accurate local distances, and by developing a faster transformation algorithm. The Optimal DTOCS enhances the locally Euclidean Weighted DTOCS (WDTOCS) with local distance coefficients, which minimize the maximum error from the Euclidean distance in the image plane, and produce more accurate global distance values. Convergence properties of the traditional mask operation, or sequential local transformation, and the ordered propagation approach are analyzed and compared to the new efficient priority pixel queue algorithm. The Route DTOCS algorithm developed in this work can be used to find and visualize shortest routes between two points, or two point sets, along a varying height surface. In a digital image, there can be several paths sharing the same minimal length, and the Route DTOCS visualizes them all. A single optimal path can be extracted from the route set using a simple backtracking algorithm. A new extension of the priority pixel queue algorithm produces the nearest neighbor transform, or Voronoi or Dirichlet tessellation, simultaneously with the distance map. The transformation divides the image into regions so that each pixel belongs to the region surrounding the reference point which is nearest according to the distance definition used. Applications and application ideas for the DTOCS and its extensions are presented, including obstacle avoidance, image compression and surface roughness evaluation.
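As an illustration of the priority pixel queue idea described above, here is a minimal Python sketch of a Dijkstra-style gray-level distance transform using plain locally Euclidean (WDTOCS-like) local distances; the error-minimizing coefficients of the Optimal DTOCS are not reproduced here, and the function name and 8-neighbourhood setup are illustrative choices.

```python
# A minimal sketch, not the thesis implementation: priority-queue propagation
# of distances on a gray-level surface, with locally Euclidean local steps.
import heapq
import math
import numpy as np

def wdtocs(gray, seeds):
    """Distance map on the gray-level surface of `gray`, propagated from
    the set of seed pixels (y, x) with a Dijkstra-style priority queue."""
    h, w = gray.shape
    dist = np.full((h, w), np.inf)
    heap = []
    for (y, x) in seeds:
        dist[y, x] = 0.0
        heapq.heappush(heap, (0.0, y, x))
    # 8-neighbourhood with in-plane step lengths 1 (edge) and sqrt(2) (diagonal)
    nbrs = [(dy, dx, math.hypot(dy, dx))
            for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)]
    while heap:
        d, y, x = heapq.heappop(heap)
        if d > dist[y, x]:
            continue  # stale queue entry
        for dy, dx, step in nbrs:
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                # locally Euclidean distance on the gray-level surface
                dg = float(gray[ny, nx]) - float(gray[y, x])
                nd = d + math.hypot(dg, step)
                if nd < dist[ny, nx]:
                    dist[ny, nx] = nd
                    heapq.heappush(heap, (nd, ny, nx))
    return dist
```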
Abstract:
Electrical impedance tomography (EIT) is a non-invasive imaging technique that can measure cardiac-related intra-thoracic impedance changes. EIT-based cardiac output estimation relies on the assumption that the amplitude of the impedance change in the ventricular region is representative of stroke volume (SV). However, other factors such as heart motion can significantly affect this ventricular impedance change. In the present case study, a magnetic resonance imaging-based dynamic bio-impedance model fitting the morphology of a single male subject was built. Simulations were performed to evaluate the contribution of heart motion and its influence on EIT-based SV estimation. Myocardial deformation was found to be the main contributor to the ventricular impedance change (56%). However, motion-induced impedance changes showed a strong correlation (r = 0.978) with left ventricular volume. We explained this by the quasi-incompressibility of blood and myocardium. As a result, EIT achieved excellent accuracy in estimating a wide range of simulated SV values (error distribution of 0.57 ± 2.19 ml (1.02 ± 2.62%) and correlation of r = 0.996 after a two-point calibration was applied to convert impedance values to millilitres). As the model was based on one single subject, the strong correlation found between motion-induced changes and ventricular volume remains to be verified in larger datasets.
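The two-point calibration mentioned above amounts to fitting a linear map from impedance amplitude to volume through two known (impedance, stroke volume) pairs. A minimal sketch, with invented numbers:

```python
# A sketch of two-point calibration: two known (impedance amplitude, SV)
# pairs fix a linear map used to convert further amplitudes to millilitres.
# All numeric values here are illustrative assumptions.
def two_point_calibration(z1, sv1, z2, sv2):
    """Return a function mapping impedance amplitude to stroke volume (ml)."""
    gain = (sv2 - sv1) / (z2 - z1)
    offset = sv1 - gain * z1
    return lambda z: gain * z + offset

to_ml = two_point_calibration(z1=0.042, sv1=60.0, z2=0.063, sv2=90.0)
print(to_ml(0.050))  # estimated stroke volume in ml for a new amplitude
```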
Abstract:
OBJECTIVE: To review the available knowledge on the epidemiology and diagnosis of acute infections in children aged 2 to 59 months in primary care settings, and to develop an electronic algorithm for the Integrated Management of Childhood Illness to reach optimal clinical outcomes and rational use of medicines. METHODS: A structured literature review in Medline, Embase and the Cochrane Database of Systematic Reviews (CDSR) looked for available estimates of disease prevalence in outpatients aged 2-59 months, and for available evidence on i) the accuracy of clinical predictors and ii) the performance of point-of-care tests for the targeted diseases. A new algorithm for the management of childhood illness (ALMANACH) was designed based on the evidence retrieved and on the results of a study on etiologies of fever in Tanzanian child outpatients. FINDINGS: The major changes in ALMANACH compared to IMCI (2008 version) are the following: i) assessment of 10 danger signs; ii) classification of non-severe children into febrile and non-febrile illness, the latter receiving no antibiotics; iii) classification of pneumonia based on a respiratory rate threshold of 50, assessed twice, for febrile children 12-59 months; iv) malaria rapid diagnostic test performed for all febrile children. In the absence of an identified source of fever at the end of the assessment: v) urine dipstick performed for febrile children <2 years to consider urinary tract infection; vi) classification of 'possible typhoid' for febrile children >2 years with abdominal tenderness; and lastly vii) classification of 'likely viral infection' in case of negative results. CONCLUSION: This smartphone-run algorithm, based on new evidence and two point-of-care tests, should improve the quality of care of children under 5 years and lead to more rational use of antimicrobials.
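The classification cascade i)-vii) above can be read as a small decision procedure. The following Python sketch is only an illustration of that branching, not the ALMANACH implementation; the field names, and any detail not quoted in the abstract, are assumptions.

```python
# A hedged sketch of the ALMANACH-style branching quoted above; field names
# and the input format are illustrative assumptions.
def classify(child):
    if any(child["danger_signs"]):          # i) 10 danger signs assessed first
        return "severe: refer"
    if not child["febrile"]:
        return "non-febrile illness: no antibiotics"       # ii)
    if 12 <= child["age_months"] <= 59 and \
       child["resp_rate_1"] >= 50 and child["resp_rate_2"] >= 50:
        return "pneumonia"                  # iii) threshold of 50, assessed twice
    if child["malaria_rdt_positive"]:       # iv) RDT for all febrile children
        return "malaria"
    # v)-vii) apply when no source of fever has been identified so far
    if child["age_months"] < 24 and child["urine_dipstick_positive"]:
        return "urinary tract infection"    # v)
    if child["age_months"] >= 24 and child["abdominal_tenderness"]:
        return "possible typhoid"           # vi)
    return "likely viral infection"         # vii)

print(classify({"danger_signs": [False] * 10, "febrile": True,
                "age_months": 30, "resp_rate_1": 44, "resp_rate_2": 46,
                "malaria_rdt_positive": False,
                "urine_dipstick_positive": False,
                "abdominal_tenderness": True}))  # -> possible typhoid
```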
Abstract:
Unlike the 1/c² approximation, where classical electrodynamics is described by the Darwin Lagrangian, there is no Lagrangian describing retarded (resp. advanced) classical electrodynamics up to order 1/c³ for two point charges with different masses.
Abstract:
Objective: Independently of total caloric intake, a better quality of the diet (for example, conformity to the Mediterranean diet) is associated with lower obesity risk. It is unclear whether a brief dietary assessment tool, instead of full-length comprehensive methods, can also capture this association. In addition to reduced costs, a brief tool has the interesting advantage of allowing immediate feedback to participants in interventional studies. Another relevant question is which individual items of such a brief tool are responsible for this association. We examined these associations using a 14-item tool of adherence to the Mediterranean diet as exposure and body mass index, waist circumference and waist-to-height ratio (WHtR) as outcomes. Design: Cross-sectional assessment of all participants in the "PREvención con DIeta MEDiterránea" (PREDIMED) trial. Subjects: 7,447 participants (55-80 years, 57% women) free of cardiovascular disease, but with either type 2 diabetes or ≥3 cardiovascular risk factors. Trained dietitians used both a validated 14-item questionnaire and a full-length validated 137-item food frequency questionnaire to assess dietary habits. Trained nurses measured weight, height and waist circumference. Results: Strong inverse linear associations between the 14-item tool and all adiposity indexes were found. For a two-point increment in the 14-item score, the multivariable-adjusted differences in WHtR were −0.0066 (95% confidence interval, −0.0088 to −0.0049) for women and −0.0059 (−0.0079 to −0.0038) for men. The multivariable-adjusted odds ratio for a WHtR > 0.6 in participants scoring ≥10 points versus ≤7 points was 0.68 (0.57 to 0.80) for women and 0.66 (0.54 to 0.80) for men. High consumption of nuts and low consumption of sweetened/carbonated beverages presented the strongest inverse associations with abdominal obesity. Conclusions: A brief 14-item tool was able to capture a strong monotonic inverse association between adherence to a good-quality dietary pattern (Mediterranean diet) and obesity indexes in a population of adults at high cardiovascular risk.
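For reference, the waist-to-height ratio used as an outcome above is simply waist circumference divided by height, with the 0.6 cut-off quoted in the abstract; a tiny sketch with illustrative values:

```python
# Waist-to-height ratio (WHtR) with the 0.6 cut-off from the abstract;
# the measurement values are invented for illustration.
def whtr(waist_cm, height_cm):
    return waist_cm / height_cm

w = whtr(waist_cm=104.0, height_cm=168.0)
print(round(w, 3), "above cut-off" if w > 0.6 else "below cut-off")
```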
Abstract:
The Bayesian paradigm is the preferred approach to evidence interpretation. It requires the evaluation of the probability of the evidence under at least two propositions. The value of the findings (i.e., our likelihood ratio, LR) will depend on these propositions and the case information, so it is crucial to identify which propositions are useful for the case at hand. Previously, a number of principles have been advanced and largely accepted for the evaluation of evidence. In the evaluation of traces involving DNA mixtures there may be more than two propositions possible. We apply these principles to some exemplar situations. We also show that in some cases, when there are no clear propositions or no defendant, a forensic scientist may be able to generate explanations to account for the observations. In that case, the scientist plays the role of investigator rather than evaluator. We believe that it is helpful for the scientist to distinguish these two roles.
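For concreteness, the likelihood ratio contrasts the probability of the evidence E under two competing propositions, conventionally the prosecution and defence propositions Hp and Hd, given the case information I:

```latex
% Standard likelihood-ratio formulation of evidence evaluation
\[
  \mathrm{LR} \;=\; \frac{\Pr(E \mid H_p, I)}{\Pr(E \mid H_d, I)}
\]
```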
Abstract:
The Gulf of Finland is said to be one of the most densely trafficked sea areas in the world. It is a shallow and economically vulnerable sea area with dense passenger and cargo traffic, of which petroleum transports have a share of over 50 %. Winter conditions add to the risks of maritime traffic in the Gulf of Finland. It is widely believed that the growth of maritime transportation will continue in the future. The Gulf of Finland is surrounded by three very different national economies with different maritime transportation structures. Finland is a country of high GDP per capita with a diversified economic structure. The number of ports is large and the maritime transportation consists of many types of cargoes: raw materials, industrial products, consumer goods, coal and petroleum products, and the Russian transit traffic of e.g. new cars and consumer goods. Russia is a large country with huge growth potential; in recent years, the expansion of petroleum exports has led to strong economic growth, which is also apparent in the growth of maritime transports. Russia has been expanding its port activities in the Gulf of Finland and is officially aiming to transport its own imports and exports through Russian ports in the future; at present they are transported to a great extent through the Finnish, Estonian and other Baltic ports. Russia has five ports in the Gulf of Finland. Estonia has also experienced fast economic growth, but the growth has been slowing down during the past couple of years. The size of its economy is small compared to Russia, which means the transported tonnages cannot be very large. However, relatively large amounts of Russian petroleum exports have been transported through the Estonian ports. The future of the Russian transit traffic in Estonia nevertheless looks uncertain, and it remains to be seen how it will develop and whether Estonia is able to find replacement cargoes if the Russian transit traffic comes to an end in the Estonian ports. Estonia's own imports and exports consist of forestry products, metals and other raw materials, and consumer goods. Estonia has many ports on the shores of the Gulf of Finland, but the port of Tallinn dominates the cargo volumes. In 2007, 263 M tonnes of cargo were transported in the maritime traffic of the Gulf of Finland, of which the share of petroleum products was 56 %. 23 % of the cargoes were loaded or unloaded in the Finnish ports, 60 % in the Russian ports and 17 % in the Estonian ports. The largest ports were Primorsk (74.2 M tonnes), St. Petersburg (59.5 M tonnes), Tallinn (35.9 M tonnes), Sköldvik (19.8 M tonnes), Vysotsk (16.5 M tonnes) and Helsinki (13.4 M tonnes). Approximately 53 600 ship calls were made in the ports of the Gulf of Finland. The densest traffic was found in the ports of St. Petersburg (14 651 ship calls), Helsinki (11 727 ship calls) and Tallinn (10 614 ship calls) in 2007. Transportation scenarios are usually based on the assumption that the amount of transports follows the development of the economy, although other factors also influence the development of transportation, e.g. government policy, environmental aspects, and social and behavioural trends. The relationship between the development of transportation and the economy is usually analyzed in terms of the development of GDP and trade. When GDP grows to a certain level, international transports in particular increase, because countries of high GDP produce, consume and thus transport more.
An effective transportation system is also a precondition for economic development. In this study, the following factors were taken into consideration when formulating the future scenarios: maritime transportation in the Gulf of Finland in 2007, economic development, development of key industries, development of infrastructure, and environmental aspects in relation to maritime transportation. The basic starting points for the three alternative scenarios were:
• the slow growth scenario: economic recession
• the average growth scenario: the economy recovers quickly from the current instability
• the strong growth scenario: the most optimistic views on development are realized
According to the slow growth scenario, the total tonnage of maritime transportation in the Gulf of Finland would be 322.4 M tonnes in 2015, a growth of 23 % compared to 2007. In the average growth scenario, the total was estimated at 431.6 M tonnes, a growth of 64 %, and in the strong growth scenario at 507.2 M tonnes, a growth of 93 %. These tonnes were further divided into petroleum products and other cargoes by country, into export, import and domestic traffic by country, and between the ports. For petroleum products, the share of crude oil and oil products was estimated and the number of tanker calls in 2015 was calculated for each scenario. However, the future development of maritime transportation in the GoF depends on so many societal and economic variables that it is not realistic to predict one exact point estimate for the cargo tonnes of a given scenario. Plenty of uncertainty relates both to the degree to which a scenario will come true and to the cause-effect relations between the different variables. For these reasons, probability distributions for each scenario were formulated by an expert group. As a result, a range for the total tonnes of each scenario was formulated, as follows: the slow growth scenario: 280.8 – 363 M tonnes (expectation value 322.4 M tonnes)
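The growth percentages quoted for the scenarios follow directly from the 2007 baseline of 263 M tonnes; a quick arithmetic check in Python:

```python
# Verifying the scenario growth figures against the 2007 baseline
# of 263 M tonnes quoted in the abstract.
base_2007 = 263.0
scenarios = {"slow": 322.4, "average": 431.6, "strong": 507.2}
for name, tonnes_2015 in scenarios.items():
    growth = (tonnes_2015 / base_2007 - 1) * 100
    print(f"{name}: {tonnes_2015} M tonnes, growth {growth:.0f} %")
# -> slow ~23 %, average ~64 %, strong ~93 %, matching the scenario text
```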
Abstract:
In this paper I discuss the intuition behind Frege's and Russell's definitions of numbers as sets, as well as Benacerraf's criticism of it. I argue that Benacerraf's argument is not as strong as some philosophers tend to think. Moreover, I examine an alternative to the Fregean-Russellian definition of numbers proposed by Maddy, and point out some problems faced by it.
Abstract:
This paper presents the development of a two-dimensional interactive software environment for structural analysis and optimization, based on object-oriented programming in the C++ language. The main feature of the software is the effective integration of several computational tools into graphical user interfaces implemented on the Windows 98 and Windows NT operating systems. The interfaces simplify data specification in the simulation and optimization of two-dimensional linear elastic problems. NURBS have been used in the software modules to represent geometric and graphical data. Extensions to the analysis of three-dimensional problems have been implemented and are also discussed in this paper.
Abstract:
The superconducting gap is a basic characteristic of a superconductor. While the cuprates and conventional phonon-mediated superconductors are characterized by distinct d- and s-wave pairing symmetries with nodal and nodeless gap distributions respectively, the superconducting gap distributions in iron-based superconductors are rather diversified. While nodeless gap distributions have been directly observed in Ba1–xKxFe2As2, BaFe2–xCoxAs2, LiFeAs, KxFe2–ySe2, and FeTe1–xSex, signatures of a nodal superconducting gap have been reported in LaOFeP, LiFeP, FeSe, KFe2As2, BaFe2–xRuxAs2, and BaFe2(As1–xPx)2. Due to the multiplicity of the Fermi surface in these compounds, s± and d pairing states can be both nodeless and nodal. A nontrivial orbital structure of the order parameter, in particular the presence of gap nodes, leads to disorder effects that are much richer in dx²–y²-wave superconductors than in conventional materials. In contrast to the s-wave case, the Anderson theorem does not apply, and nonmagnetic impurities exert a strong pair-breaking influence. In addition, a finite concentration of disorder produces a nonzero density of quasiparticle states at zero energy, which results in a considerable modification of the thermodynamic and transport properties at low temperatures. The influence of order-parameter symmetry on the vortex core structure in iron-based pnictide and chalcogenide superconductors has been investigated in the framework of the quasiclassical Eilenberger equations. The main results of the thesis are as follows. The vortex core characteristics, namely the cutoff parameter ξh and the core size ξ2, defined as the distance at which the density of the vortex supercurrent reaches its maximum, are calculated over wide ranges of temperature, impurity scattering rate and magnetic field. The cutoff parameter ξh(B, T, Γ) determines the form factor of the flux-line lattice, which can be obtained in μSR, NMR, and SANS experiments. A comparison among the applied pairing symmetries is made. In contrast to s-wave systems, in dx²–y²-wave superconductors ξh/ξc2 always increases with the scattering rate Γ. The field dependence of the cutoff parameter strongly affects the second moment of the magnetic field distribution, resulting in a significant difference from nonlocal London theory. It is found that the normalized ξ2/ξc2(B/Bc2) dependence increases with pair-breaking impurity scattering (interband scattering for s±-wave and intraband impurity scattering for d-wave superconductors). Here, ξc2 is the Ginzburg-Landau coherence length determined from the upper critical field Bc2 = Φ0/(2πξc2²), where Φ0 is the flux quantum. Two types of ξ2/ξc2 magnetic field dependence are obtained for s± superconductors: the first has a minimum at low temperatures and small impurity scattering, transforming into a monotonically decreasing function at strong scattering and high temperatures. The second kind of dependence has also been found for d-wave superconductors at intermediate and high temperatures. In contrast, impurity scattering decreases the ξ2/ξc2(B/Bc2) dependence in s++ superconductors. Reasonable agreement was found between the calculated ξh/ξc2 values and those obtained experimentally in nonstoichiometric BaFe2–xCoxAs2 (μSR) and stoichiometric LiFeAs (SANS). The values of ξh/ξc2 are much less than one for the first compound and much greater than one for the second. This is explained by the different influence of two factors: the magnitude of the impurity scattering rate and the pairing symmetry.
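As a worked example of the Ginzburg-Landau relation quoted above, Bc2 = Φ0/(2πξc2²) can be inverted to obtain ξc2 from a measured upper critical field; the 20 T field value below is an illustrative assumption, not a result from the thesis.

```python
# Inverting Bc2 = Phi0 / (2*pi*xi_c2**2) for the coherence length xi_c2;
# the Bc2 value is illustrative.
import math

PHI0 = 2.067833848e-15  # magnetic flux quantum, Wb

def xi_c2(Bc2_tesla):
    return math.sqrt(PHI0 / (2 * math.pi * Bc2_tesla))

print(xi_c2(20.0) * 1e9, "nm")  # ~4 nm coherence length for Bc2 = 20 T
```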
Abstract:
Appropriate supplier selection and its profound effect on increasing the competitive advantage of companies have been widely discussed in the supply chain management (SCM) literature. As environmental awareness rises among companies and industries, they attach more importance to sustainable and green activities in the selection of raw material providers. The current thesis uses the data envelopment analysis (DEA) technique to evaluate the relative efficiency of suppliers in the presence of carbon dioxide (CO2) emission for green supplier selection. We incorporate the pollution of suppliers as an undesirable output into DEA. However, in doing so, two problems of conventional DEA models arise: the lack of discrimination power among decision making units (DMUs) and the flexibility of the input and output weights. To overcome these limitations, we use multiple criteria DEA (MCDEA) as one alternative. By applying MCDEA, the number of suppliers identified as efficient decreases, leading to a better ranking and selection of the suppliers. Besides, in order to compare the performance of the suppliers with an ideal supplier, a "virtual" best-practice supplier is introduced. The presence of the ideal virtual supplier also increases the discrimination power of the model for a better ranking of the suppliers. Therefore, a new MCDEA model is proposed to simultaneously handle undesirable outputs and a virtual DMU. The developed model is applied to the green supplier selection problem, and a numerical example illustrates its applicability.
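For orientation, the sketch below computes a standard input-oriented CCR DEA efficiency score per supplier, with the CO2 emission handled in one common way, as an additional input. This is plain DEA rather than the MCDEA model with a virtual ideal supplier proposed in the thesis, and the data are invented for illustration.

```python
# A minimal CCR (multiplier-form, input-oriented) DEA sketch with CO2
# treated as an extra input; data and supplier set are invented.
import numpy as np
from scipy.optimize import linprog

X = np.array([[10, 2.1], [12, 1.4], [9, 3.0], [11, 1.8]])  # cost, CO2 (inputs)
Y = np.array([[100.], [120.], [90.], [115.]])               # supplied units (output)

def ccr_efficiency(o):
    n, m = X.shape
    s = Y.shape[1]
    # variables: [u (output weights), v (input weights)]
    # maximize u.y_o  ==  minimize -u.y_o
    c = np.concatenate([-Y[o], np.zeros(m)])
    # u.y_j - v.x_j <= 0 for every DMU j
    A_ub = np.hstack([Y, -X])
    b_ub = np.zeros(n)
    # normalization: v.x_o = 1
    A_eq = np.concatenate([np.zeros(s), X[o]]).reshape(1, -1)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (s + m))
    return -res.fun  # efficiency score in (0, 1]

for o in range(len(X)):
    print(f"supplier {o}: efficiency {ccr_efficiency(o):.3f}")
```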
Abstract:
By leveraging cloud services, companies and organizations can significantly improve their efficiency as well as build novel business opportunities. Cloud computing offers various advantages to companies while also posing some risks. The advantages offered by service providers are mostly about efficiency and reliability, while the risks of cloud computing are mostly about security. Security problems in the cloud still demand significant attention. As in any area of computing, they cannot be fully eliminated; however, novel solutions can help service providers mitigate the potential threats to a large extent. Looking at the security problem from a high-level perspective, there are two focus directions: security problems that threaten the service user's security and privacy on one side, and security problems that threaten the service provider's security and privacy on the other. Both kinds of threats should mostly be detected and mitigated by service providers. Looking more closely at the problem, mitigating security problems that target providers can protect both the service provider and the user; however, the focus of the research community is mostly on solutions that protect cloud users. A significant research effort has been put into protecting cloud tenants against external attacks. However, attacks originating from elastic, on-demand and legitimate cloud resources should still be taken seriously. The cloud-based botnet, or botcloud, is one of the prevalent cases of cloud resource misuse. Unfortunately, some of the cloud's essential characteristics enable criminals to form reliable and low-cost botclouds in a short time. In this paper, we present a system that helps to detect distributed infected Virtual Machines (VMs) acting as elements of botclouds. Based on a set of botnet-related system-level symptoms, our system groups VMs. Grouping VMs helps to separate infected VMs from the others and narrows down the target group under inspection. Our system takes advantage of Virtual Machine Introspection (VMI) and data mining techniques.
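The grouping step can be pictured as clustering VMs on a vector of symptom features; the sketch below uses k-means purely as an illustration. The paper's own feature set and mining technique may differ, and the features and data here are invented.

```python
# A hedged sketch of symptom-based VM grouping via clustering; the feature
# set, the use of k-means, and all data are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

# rows = VMs; columns = e.g. outbound SMTP rate, DNS failure rate,
# connection fan-out, CPU activity while "idle" (all normalized to [0, 1])
symptoms = np.array([
    [0.9, 0.8, 0.9, 0.7],   # bot-like behaviour
    [0.1, 0.0, 0.2, 0.1],
    [0.8, 0.9, 0.8, 0.6],   # bot-like behaviour
    [0.2, 0.1, 0.1, 0.0],
])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(symptoms)
print(labels)  # VMs sharing a cluster go into the same inspection group
```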
Abstract:
In this work we study two different one-dimensional quantum systems: a linear potential in an infinite well, and an inverted harmonic oscillator in an infinite well. We solve the Schrödinger equation for both systems and obtain the energy eigenvalues and eigenfunctions. The solutions are obtained using the boundary conditions and numerical methods. The motivation for our study comes from an experimental background. For the linear potential we use two different boundary conditions. The first is the so-called normal boundary condition, in which the wave function goes to zero at the edge of the well. The second is the derivative boundary condition, in which the derivative of the wave function goes to zero at the edge of the well. The actual solutions are Airy functions. In the case of the inverted oscillator the solutions are parabolic cylinder functions, and they are solved only with the normal boundary condition. Both potentials are compared with the particle-in-a-box solutions. We also present figures and tables showing what the solutions look like; the similarities and differences with the particle-in-a-box solutions are shown visually. The figures and calculations are produced with mathematical software. We also compare the linear potential to a case where the infinite wall is only on the left side, and show graphical information on its different properties. With the inverted harmonic oscillator we take a closer look at quantum-mechanical tunneling. We present some of the history of quantum tunneling theory and its developers, and finally the Feynman path-integral theory, which enables us to obtain the instanton solutions. The instanton solutions are a way to examine the tunneling properties of a quantum system. The results are compared with the solutions of the double-well potential, which as a quantum system is very similar to our case; since the solutions are obtained using the same methods, the comparison is relatively straightforward. All in all, we go through some of the stages of quantum theory and the different ways to interpret it. We also present the special functions needed in our solutions, and examine their properties and relations to other special functions. It is essential to note that different mathematical formalisms can be used to reach the desired result; quantum theory has been built up for over one hundred years and admits different approaches, each making it possible to look at different aspects.
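As a minimal illustration of solving such a system numerically (not the thesis' own code), the sketch below diagonalizes a finite-difference Hamiltonian for the linear potential in an infinite well with the normal boundary condition; units with ħ = m = 1 and the parameter values are assumptions.

```python
# Finite-difference sketch: H = -(1/2) d^2/dx^2 + F*x on [0, L] with the
# wave function forced to zero at the walls (normal boundary condition).
# Units hbar = m = 1; L, F and the grid size are illustrative choices.
import numpy as np

L, F, n = 10.0, 1.0, 800              # well width, potential slope, grid points
x = np.linspace(0.0, L, n + 2)[1:-1]  # interior points; psi = 0 at the walls
dx = x[1] - x[0]

# tridiagonal Hamiltonian from the three-point second-derivative stencil
main = 1.0 / dx**2 + F * x
off = -0.5 / dx**2 * np.ones(n - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E, psi = np.linalg.eigh(H)
print(E[:4])  # lowest energy eigenvalues; the eigenfunctions are Airy-like
```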