938 results for Closed time-like curves
Abstract:
The thesis deals with non-linear Gaussian and non-Gaussian time series models, concentrating mainly on the properties and applications of a first-order autoregressive process with Cauchy marginal distribution. Time series relating to prices, consumption, money in circulation, bank deposits and bank clearings, sales and profit in a departmental store, national income and foreign exchange reserves, and prices and dividends of shares in a stock exchange are examples of economic and business time series. The thesis discusses the application of a threshold autoregressive (TAR) model and attempts to fit this model to time series data. Another important non-linear model is the ARCH model, and the third model is the TARCH model. The main objective is to identify an appropriate model for a given set of data. The data considered are the daily coconut oil prices for a period of three years. Since these are price data, consecutive prices may not be independent, and hence a time-series-based model is more appropriate. The study also examines properties such as ergodicity, the mixing property and time reversibility, as well as various estimation procedures used to estimate the unknown parameters of the process.
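As an aside for the reader, the first-order autoregressive structure with Cauchy marginals can be illustrated with a short simulation. The sketch below relies on the standard closure property of the Cauchy family (if $X_{t-1}$ is standard Cauchy and the innovation is Cauchy with scale $1-|\phi|$, then $X_t = \phi X_{t-1} + \epsilon_t$ is again standard Cauchy); the parameter value and sample size are illustrative, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def cauchy_ar1(phi, n, burn=500):
    """Simulate X_t = phi*X_{t-1} + e_t with e_t ~ Cauchy(0, 1-|phi|),
    so that the stationary marginal of X_t is standard Cauchy."""
    scale = 1.0 - abs(phi)            # innovation scale preserving the marginal
    x = rng.standard_cauchy()         # start in the stationary distribution
    out = np.empty(n)
    for t in range(n + burn):
        x = phi * x + scale * rng.standard_cauchy()
        if t >= burn:
            out[t - burn] = x
    return out

series = cauchy_ar1(phi=0.6, n=1000)
# Heavy tails: occasional huge excursions make moment-based diagnostics
# (e.g. the sample autocorrelation) unreliable for this process.
print(series[:5])
```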
Abstract:
Time and space resolved studies of emission from CN molecules have been carried out in the plasma produced from a graphite target by 1.06 μm pulses from a Q-switched Nd:YAG laser. Depending on the laser pulse energy, the time of observation and the position of the sampled volume of the plasma, the features of the emission spectrum are found to change drastically. The vibrational temperature and the population distribution in the different vibrational levels have been studied as functions of distance, time, laser energy and ambient gas pressure. Evidence for nonlinear effects in the plasma medium, such as self-focusing exhibiting threshold-like behaviour, is also obtained. The temperature and electron density of the plasma have been evaluated using the relative line intensities of successive ionization stages of the carbon atom. These electron density measurements are verified using the Stark broadening method.
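For orientation, temperature estimates from relative line intensities of the kind mentioned above are commonly based on the Boltzmann-plot relation for emission lines of a single species (a standard spectroscopy formula, not a result specific to this thesis):

$$\ln\!\left(\frac{I_{ki}\,\lambda_{ki}}{g_k A_{ki}}\right) = -\frac{E_k}{k_B T} + \text{const},$$

where $I_{ki}$ is the measured line intensity, $\lambda_{ki}$ the wavelength, $g_k$ and $E_k$ the statistical weight and energy of the upper level, and $A_{ki}$ the transition probability; plotting the left-hand side against $E_k$ for several lines gives a straight line of slope $-1/k_B T$. For lines of successive ionization stages, as used here, the intensity ratio additionally involves the Saha factor and hence the electron density.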
Abstract:
An alkaline protease gene (Eap) was isolated for the first time from a marine fungus, Engyodontium album. Eap consists of an open reading frame of 1,161 bp encoding a prepropeptide of 387 amino acids with a calculated molecular mass of 40.923 kDa. Homology comparison of the deduced amino acid sequence of Eap with other known proteins indicated that Eap encodes an extracellular protease belonging to the subtilase family of serine proteases (Family S8). A comparative homology model of the Engyodontium album protease (EAP) was developed using the crystal structure of proteinase K. The model revealed that EAP has broad substrate specificity similar to proteinase K, with a preference for bulky hydrophobic residues at P1 and P4. Also, EAP is suggested to have two disulfide bonds and more than two Ca2+ binding sites in its 3D structure, both of which are assumed to contribute to the thermostable nature of the protein.
Abstract:
Leachate from an untreated landfill, or from a landfill with damaged liners, causes pollution of soil and groundwater. Here an attempt was made to generate knowledge on the concentrations of all relevant pollutants in soil due to municipal solid waste landfill leachate and its migration through soil, and also to study the effect of leachate on the engineering properties of soil. To identify the pollutants in soil due to the leachate generated from a municipal solid waste landfill site, a case study of an unlined municipal solid waste landfill at Kalamassery was carried out. Soil samples as well as water samples were collected from the site and analysed to identify the pollutants and their effect on soil characteristics. The major chemicals in the soil were identified as ammonia, chloride, nitrate, iron, nickel, chromium and cadmium. Engineering properties of field soil samples show that the chemicals from the landfill leachate may affect the engineering properties of soil. Laboratory experiments were formulated to model the field around an unlined MSW landfill using two different soils subjected to a synthetic leachate. The maximum change in chemical concentration and engineering properties was observed in soil samples at a radial distance of 0.2 m and at a depth of 0.3 m. The pollutant (chemical) transport pattern through the soil was also studied using synthetic leachate. To establish the effect of pollutants (chemicals) on the engineering properties of soil, experiments were conducted on two types of soils treated with the synthetic chemicals at four different concentrations. Analyses were conducted after maturing periods of 7, 50, 100 and 150 days. Test soils treated with the maximum chemical concentration and matured for 150 days showed the largest change in properties. To visualize the flow of pollutants through soil in a broader sense, the transport of pollutants through soil was modelled using the software Visual MODFLOW. The field data collected for the case study were used to calibrate the model, which was then used to simulate the flow pattern of the pollutants through soil around the Kalamassery municipal solid waste landfill over an extent of 4 km². Flow was analysed for a time span of 30 years, with the landfill closed after 20 years. The concentration of leachate beneath the landfill was observed to reduce considerably within one year after closure of the landfill, and within 8 years it is lowered to a negligible level. As an environmental management measure to control pollution through leachate, permeable reactive barriers are an emerging technology. Here the suitability of locally available materials like coir pith, rice husk and sugarcane bagasse was investigated as reactive media in a permeable reactive barrier. The test results illustrate that, among these, coir pith showed the best performance, with the maximum percentage reduction in the concentration of the filtrate. All three agricultural wastes can be effectively utilized as reactive material. This research establishes the influence of leachate from a municipal solid waste landfill on the engineering properties of soil. Factors such as the type of soil, composition of leachate, infiltration rate, aquifers and groundwater table play a major role in determining the influence zone of the pollutants around a landfill. Software models of the landfill area can be used to predict the extent and the time span of pollution from a landfill, by inputting accurate field parameters and leachate characteristics.
The present study throws light on the role of agro-waste materials in reducing the pollution in leachate, thus protecting groundwater and soil from contamination.
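For orientation, solute transport simulations of the kind performed here with Visual MODFLOW are typically built on the advection-dispersion equation; the one-dimensional form below is a generic textbook statement, not the exact model configuration used in the study:

$$R\,\frac{\partial C}{\partial t} = D_L\,\frac{\partial^2 C}{\partial x^2} - v\,\frac{\partial C}{\partial x} - \lambda C,$$

where $C$ is the solute concentration, $v$ the seepage velocity, $D_L$ the longitudinal dispersion coefficient, $R$ a retardation factor accounting for sorption, and $\lambda$ a first-order decay constant.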
Abstract:
The study of variable stars is an important topic of modern astrophysics. After the invention of powerful telescopes and high-resolution CCDs, variable star data are accumulating on the order of petabytes. This huge amount of data needs many automated methods as well as human experts. This thesis is devoted to the analysis of variable stars' astronomical time series data and hence belongs to the interdisciplinary field of Astrostatistics. For an observer on Earth, stars whose apparent brightness changes over time are called variable stars. The variation in brightness may be regular (periodic), quasi-periodic (semi-periodic) or irregular (aperiodic), and is caused by various mechanisms. In some cases, the variation is due to internal thermonuclear processes; such stars are generally known as intrinsic variables. In other cases it is due to external processes, like eclipses or rotation; these are known as extrinsic variables. Intrinsic variables can be further grouped into pulsating variables, eruptive variables and flare stars. Extrinsic variables are grouped into eclipsing binary stars and chromospherical stars. Pulsating variables can again be classified into Cepheid, RR Lyrae, RV Tauri, Delta Scuti, Mira etc. The eruptive or cataclysmic variables are novae, supernovae, etc., which occur rarely and are not periodic phenomena. Most of the other variations are periodic in nature. Variable stars can be observed in many ways, such as photometry, spectrophotometry and spectroscopy. A sequence of photometric observations of a variable star produces time series data containing time, magnitude and error. The plot of a variable star's apparent magnitude against time is known as the light curve. If the time series data are folded on a period, the plot of apparent magnitude against phase is known as the phased light curve. The distinctive shape of the phased light curve is characteristic of each type of variable star. One way to identify the type of a variable star and to classify it is for an expert to visually inspect the phased light curve. For the last several years, automated algorithms have been used to classify groups of variable stars with the help of computers. Research on variable stars can be divided into different stages: observation, data reduction, data analysis, modeling and classification. The modeling of variable stars helps to determine short-term and long-term behaviour, to construct theoretical models (e.g., the Wilson-Devinney model for eclipsing binaries) and to derive stellar properties like mass, radius, luminosity, temperature, internal and external structure, chemical composition and evolution. Classification requires the determination of the basic parameters like period, amplitude and phase, and also some other derived parameters. Of these, the period is the most important, since wrong periods lead to sparse light curves and misleading information. Time series analysis applies mathematical and statistical tests to data in order to quantify the variation, understand the nature of the time-varying phenomena, gain physical understanding of the system and predict its future behavior. Astronomical time series usually suffer from unevenly spaced time instants, varying error conditions and the possibility of large gaps. This is due to daily varying daylight and weather conditions for ground-based observations, while observations from space may suffer from the impact of cosmic ray particles.
Many large-scale astronomical surveys such as MACHO, OGLE, EROS, ROTSE, PLANET, Hipparcos, MISAO, NSVS, ASAS, Pan-STARRS, Kepler, ESA, Gaia, LSST and CRTS provide variable star time series data, even though their primary intention is not variable star observation. The Center for Astrostatistics at Pennsylvania State University was established to support the astronomical community with statistical tools for harvesting and analysing archival data. Most of these surveys release their data to the public for further analysis. There exist many period search algorithms for astronomical time series analysis, which can be classified into parametric (assuming some underlying distribution for the data) and non-parametric (not assuming any statistical model, such as a Gaussian) methods. Many of the parametric methods are based on variations of the discrete Fourier transform, like the Generalised Lomb-Scargle periodogram (GLSP) by Zechmeister (2009) and the Significant Spectrum (SigSpec) method by Reegen (2007). Non-parametric methods include Phase Dispersion Minimisation (PDM) by Stellingwerf (1978) and the cubic spline method by Akerlof (1994). Even though most of these methods can be automated, none of them fully recovers the true periods. Wrong period detection can have several causes, such as power leakage to other frequencies, which is due to the finite total interval, finite sampling interval and finite amount of data. Another problem is aliasing, which is due to the influence of regular sampling. Spurious periods also appear due to long gaps, and power flow to harmonic frequencies is an inherent problem of Fourier methods. Hence obtaining the exact period of a variable star from its time series data is still a difficult problem for huge databases subjected to automation. As Matthew Templeton, AAVSO, states, "Variable star data analysis is not always straightforward; large-scale, automated analysis design is non-trivial". Derekas et al. (2007) and Deb et al. (2010) state, "The processing of the huge amount of data in these databases is quite challenging, even when looking at seemingly small issues such as period determination and classification". It will be beneficial for the variable star community if basic parameters such as period, amplitude and phase are obtained more accurately when huge time series databases are subjected to automation. In the present thesis work, the theories of four popular period search methods are studied, the strengths and weaknesses of these methods are evaluated by applying them to two survey databases, and finally a modified form of the cubic spline method is introduced to confirm the exact period of a variable star. For the classification of newly discovered variable stars and their entry into the "General Catalogue of Variable Stars" or other databases like the "Variable Star Index", the characteristics of the variability have to be quantified in terms of variable star parameters.
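Phase folding, the operation behind the phased light curves and the dispersion-based period searches discussed above, is compact to state in code. The sketch below folds a toy, unevenly sampled light curve on trial periods and scores each by the within-bin scatter, in the spirit of Stellingwerf's PDM; it is a simplified generic illustration, not the modified cubic spline method introduced in the thesis.

```python
import numpy as np

def fold(time, period):
    """Map observation times to phases in [0, 1) for a trial period."""
    return (time / period) % 1.0

def pdm_score(time, mag, period, nbins=10):
    """Ratio of mean within-bin variance to overall variance; smaller is
    better (cf. Stellingwerf 1978). Simplified single-pass variant."""
    phase = fold(time, period)
    bins = np.floor(phase * nbins).astype(int)
    within = [np.var(mag[bins == b]) for b in range(nbins)
              if np.sum(bins == b) > 1]
    return np.mean(within) / np.var(mag)

# Toy unevenly sampled sinusoidal "light curve" with noise (illustrative only).
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 100, 300))
m = 12.0 + 0.3 * np.sin(2 * np.pi * t / 2.5) + 0.02 * rng.normal(size=300)

trials = np.linspace(0.5, 5.0, 2000)
scores = [pdm_score(t, m, p) for p in trials]
print("best trial period:", trials[int(np.argmin(scores))])  # expect ~2.5
```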
Abstract:
The evolution of a coast through geological time scales depends on the transgression-regression events that follow the rise or fall of sea level. These events are accounted for by investigating the vertical sediment deposition patterns and their interrelationship for paleo-environmental reconstruction. Different methods such as sedimentological (grain size and micro-morphological) and geochemical (elemental relationship) analyses, as well as radiocarbon dating, are generally used to decipher the sea level changes and paleoclimatic conditions of the Quaternary sediment sequence. For the Indian coast, with a coastline length of about 7500 km, studies on geological and geomorphological signatures of sea level changes during the Quaternary have been reported in general by researchers during the last two decades. However, for the southwest coast of India, particularly Kerala, which is famous for its coastal landforms comprising estuaries, lagoons, backwaters, coastal plains, cliffs and barrier beaches, studies pertaining to the marine transgression-regression events in the southern region are limited. The Neendakara-Kayamkulam coastal stretch in central Kerala, where the coast is manifested with the shore-parallel Kayamkulam Lagoon on one side and the shore-perpendicular Ashtamudi Estuary on the other, indicating the existence of an uplifted prograded coastal margin followed by barrier beaches, backwater channels, and ridge and runnel topography, is an ideal site for studying such events. Hence the present study has been taken up in this context to address the gap. The locations for collection of core samples representing the coastal plain, estuary-lagoon and offshore regions were identified based on published literature and available sedimentary records. The objectives of the research work are: (i) to study the lithological variations and depositional environments of sediment cores along the coastal plain, estuary-lagoon and offshore regions between Kollam and Kayamkulam on the central Kerala coast; (ii) to study the transportation and diagenetic history of sediments in the area; (iii) to investigate the geochemical characterization of sediments and to elucidate the source-sink relationship; and (iv) to understand the marine transgression-regression events and to propose a conceptual model for the region. The thesis comprises 8 chapters. The first chapter embodies the preamble for the selection and significance of this research work. The study area is introduced with details on its physiography, geology, geomorphology, rainfall and climate. A review of the literature, compiling research on different aspects such as physico-chemical characteristics, geomorphology, tectonics and transgression-regression events, is presented in the second chapter, broadly classified into three categories: international, national and Kerala. The field data collection and laboratory analyses adopted in the research work are discussed in the third chapter. For the collection of sediment core samples from the coastal plains, the rotary drilling method was employed, whereas for the estuary-lagoon and offshore locations the gravity/piston corer method was adopted. The collected subsurface samples were analysed for texture, surface micro-texture and elemental composition, by XRD, and by radiocarbon dating for age determination. The fourth chapter deals with the textural analysis of the core samples collected from various predefined locations in the study area.
The results reveal that the Ashtamudi Estuary is composed of silty clay to clayey sediments, whereas the offshore cores are carpeted with silty clay to relict sand. Investigation of the source of sediments deposited in the coastal plain located on either side of the estuary indicates a dominance of terrigenous to marine origin in the southern region, whereas it is predominantly of marine origin towards the north. Further, the hydrodynamic conditions as well as the depositional environment of the sediment cores are elucidated based on statistical parameters that decipher the deposition pattern at the various locations, viz. coastal plain (open to closed basin), Ashtamudi Estuary (partially open to restricted estuary to closed basin) and offshore (open channel). The intensity of clay minerals is also discussed. From the results of radiocarbon dating, the sediment depositional environments were deciphered. The results of the microtextural study of sediment samples (quartz grains) using a Scanning Electron Microscope (SEM) are presented in the fifth chapter. These results throw light on the processes of transport and the diagenetic history of the detrital sediments. Based on the lithological variations, selected quartz grains from different environments were also analysed. The study indicates that the southern coastal plain sediments were transported and deposited mechanically under a fluvial environment, followed by diagenesis under prolonged marine incursion. In the case of the northern coastal plain, however, the sediments were transported and deposited under a littoral environment, indicating the dominance of marine incursion through mechanical as well as chemical processes. The quartz grains of the Ashtamudi Estuary indicate a fluvial origin. The surface texture features of the offshore sediments suggest that the quartz grains are of littoral origin and represent relict beach deposits. The geochemical characterisation of the sediment cores based on geochemical classification, sediment maturity, palaeo-weathering and provenance in the different environments is discussed in the sixth chapter. In the seventh chapter the integration of multiproxy data along with radiocarbon dates is presented, and finally the evolution and depositional history based on transgression-regression events is deciphered. The eighth chapter summarizes the major findings and conclusions of the study, with recommendations for future work.
Abstract:
The rise of the English novel needs rethinking after it has been confined to the "formal realism" of Defoe, Richardson, and Fielding (Watt, 1957), to "antecedents, forerunners" (Schlauch, 1968; Klein, 1970) or to mere "prose fiction" (McKillop, 1951; Davis, Richetti, 1969; Fish, 1971; Salzman, 1985; Kroll, 1998). My paper updates a book by Jusserand under the same title (1890) by proving that the social and moral history of the long prose genre admits no strict separation of "novel" and "romance", as both concepts are intertwined in most fiction (Cuddon, Preston, 1999; Mayer, 2000). The rise of the novel, seen in its European context, mainly in France and Spain (Kirsch, 1986), and equally in England, was due to the melting of the nobility and high bourgeoisie into a "meritocracy", or to its failure to become the new bearer of the national culture, around 1600 (Brink, 1998). My paper will concentrate on Euphues (1578), a negative romance, and Euphues and His England (1580), a novel of manners, both by Lyly; Arcadia (1590-93) by Sidney, a political roman à clef in the disguise of a Greek pastoral romance; The Unfortunate Traveller (1594) by Nashe, the first English picaresque novel; and Jack of Newbury (1596-97) by Deloney, the first English bourgeois novel. My analysis of the central values in these novels will prove a transition from the aristocratic cardinal virtues of WISDOM, JUSTICE, COURAGE, and HONOUR to the bourgeois values of CLEVERNESS, FAIR PLAY, INDUSTRY, and VIRGINITY. A similar change took place from the Christian virtues of LOVE, FAITH, HOPE to business values like SERVICE, TRUST, and OPTIMISM. Thus, the legacy of history proves that the main concepts of the novel of manners, of political romance, of picaresque and middle-class fiction were all developed in the time of Shakespeare.
Abstract:
After 35 years of development, the first commercial application of the innovative Transrapid maglev system went into operation in Shanghai in 2004; in Germany, no Transrapid line has been realized to date, even though this system was developed for use in Germany in accordance with the results of a 1972 study commissioned by the then Federal Minister of Transport. The Transrapid is a genuine product innovation in rail transport, not a further development or optimization like the ICE, and must therefore be placed, as an innovative transport mode of the future, within the long-term evolution of transport systems. The modern high-speed rail systems (Shinkansen/TGV/ICE), by contrast, are, much like the clippers in the age of sail at the transition to the steamship, final defensive developments of a rail transport system that has reached its zenith. Introducing innovations into a closed market proves difficult, since they cause a break within an established system. The first part of this thesis therefore examines and presents the topic of innovation and the placement of magnetic levitation technology within these long-term structured processes. The Transrapid project must accordingly be placed within a temporal-structural cyclicality, which suggests shifting the realization of the overall project into a time span of 20 to 30 years. In the second part, on the basis of a regional-structural analysis of the Federal Republic of Germany, a possible Transrapid network is designed and the travel times achievable in this network are simulated. Furthermore, the changes in the accessibility of individual regions resulting from their connection to the Transrapid network are simulated and presented graphically. The present analysis of the temporal fine structure of a prospective Transrapid network is a model-based frame of orientation for objectifying the time advantages of a coordinated infrastructure compared with the travel times actually achievable within Germany using the existing modes of rail, road and air. Thus, operating the Transrapid on a corresponding dedicated network would promote the decentralized concentration of agglomerations in Germany and enable travel times that are, on average, roughly one hour shorter than with the current modes of transport. In addition, an outlook on possible steps towards realizing a complete network is given, and the difficulties encountered in introducing the innovative Transrapid transport system are presented.
Abstract:
This thesis shows how atomic many-body perturbation theory can be used to calculate total energies as well as excitation energies of atoms and ions. To this end, it was first necessary to derive the perturbation series by means of computer-algebraic methods. Using the Maple program package APEX developed for this purpose, this was carried out up to fourth order for closed-shell systems and systems with one active electron or hole, although the corresponding terms could not be reproduced here owing to their large number. The next step was the analytical angular reduction using the Maple program package RACAH, which was adapted and further developed for this purpose. Only at this point was the spherical symmetry of the atomic reference state exploited, resulting in a considerable simplification of the perturbation terms. The second part of this work is concerned with the numerical evaluation of the perturbation series treated purely analytically up to this point. For this purpose, building on the Fortran program package Ratip, a Dirac-Fock program for closed-shell systems was developed, based on the matrix Dirac-Fock method presented in Chapter 3. Within this environment it was then possible to evaluate the perturbation terms numerically. It quickly became apparent that this can only be done within a reasonable time frame if the corresponding radial integrals are held in the computer's main memory. Owing to the very large number of these integrals, this placed high demands on the hardware used. This was, in particular, the reason why the third-order corrections could only be computed partially and the fourth-order corrections not at all. Finally, the correlation energies of He-like systems as well as of neon, argon and mercury were calculated and compared with literature values. In addition, Li-like systems, sodium, potassium and thallium were investigated, considering the lowest states of the valence electron. The ionization energies of the superheavy elements 113 and 119 conclude this work.
Abstract:
The time dependence of a heavy-ion-atom collision system is solved via a set of coupled channel equations using energy eigenvalues and matrix elements from a self-consistent field relativistic molecular many-electron Dirac-Fock-Slater calculation. Within this independent particle model we give a full many-particle interpretation by performing a small number of single-particle calculations. First results for the P(b) curves for the Ne K-hole excitation in the systems F^{8+}-Ne and F^{6+}-Ne are discussed as examples.
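For readers unfamiliar with the method, coupled channel equations of the kind referred to above have, in their generic semiclassical form (expansion of the time-dependent wavefunction in molecular basis states along a classical trajectory; standard notation, not copied from the paper), the structure

$$i\hbar\,\dot{c}_j(t) = \sum_k c_k(t)\left[H_{jk}(t) - i\hbar\,\Big\langle \phi_j \Big|\, \frac{\partial}{\partial t} \Big| \phi_k \Big\rangle\right],$$

where the $c_j$ are occupation amplitudes of the molecular states $\phi_j$ (here obtained from the relativistic Dirac-Fock-Slater calculation) and $H_{jk}$ are the Hamiltonian matrix elements; solving along trajectories with different impact parameters $b$ yields the excitation probabilities $P(b)$.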
Abstract:
The real-time dynamics of Na_n (n=3-21) cluster multiphoton ionization and fragmentation has been studied in beam experiments applying femtosecond pump-probe techniques in combination with ion and electron spectroscopy. Three-dimensional wave packet motions in the trimer Na_3 ground state X and excited state B have been observed. We report the first study of cluster properties (energy, bandwidth and lifetime of intermediate resonances Na_n^*) with femtosecond laser pulses. The observation of four absorption resonances for the cluster Na_8, with different energy widths and different decay patterns, is more readily interpreted in terms of molecular structure and dynamics than in terms of surface-plasmon-like resonances. Time-resolved fragmentation of cluster ions Na_n^+ indicates that direct photo-induced fragmentation processes are more important at short times than statistical unimolecular decay.
Abstract:
The problem of the relevance and usefulness of extracted association rules is of primary importance because, in the majority of cases, real-life databases lead to several thousand association rules with high confidence, among which are many redundancies. Using the closure of the Galois connection, we define two new bases for association rules whose union is a generating set for all valid association rules with support and confidence. These bases are characterized using frequent closed itemsets and their generators; they consist of the non-redundant exact and approximate association rules having minimal antecedents and maximal consequents, i.e. the most relevant association rules. Algorithms for extracting these bases are presented, and results of experiments carried out on real-life databases show that the proposed bases are useful and that their generation is not time-consuming.
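The closure operator of the Galois connection used above has a compact operational form in the transactional setting: the closure of an itemset is the intersection of all transactions containing it, and an itemset is closed iff it equals its own closure. The sketch below illustrates this definition on a toy database; it is a generic illustration, not the extraction algorithms of the paper.

```python
from functools import reduce

# Toy transaction database (illustrative only).
transactions = [
    {"a", "b", "c"},
    {"a", "b"},
    {"a", "c", "d"},
    {"b", "c"},
]

def closure(itemset, db):
    """Galois closure: intersect all transactions containing `itemset`."""
    covering = [t for t in db if itemset <= t]
    if not covering:                  # itemset occurs in no transaction
        return set(itemset)
    return set(reduce(lambda x, y: x & y, covering))

print(closure({"a"}, transactions))                     # {'a'} -> closed
print(closure({"b", "c"}, transactions) == {"b", "c"})  # True  -> closed
# A generator is a minimal itemset having a given closure; the exact rules
# of the base go from a generator to (closure \ generator), which yields
# the minimal antecedents and maximal consequents described above.
```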
Abstract:
In the theory of the Navier-Stokes equations, the proofs of some basic known results, like for example the uniqueness of solutions to the stationary Navier-Stokes equations under smallness assumptions on the data or the stability of certain time discretization schemes, actually only use a small range of properties and are therefore valid in a more general context. This observation leads us to introduce the concept of SST spaces, a generalization of the functional setting for the Navier-Stokes equations. It allows us to prove (by means of counterexamples) that several uniqueness and stability conjectures that are still open in the case of the Navier-Stokes equations have a negative answer in the larger class of SST spaces, thereby showing that proof strategies used for a number of classical results are not sufficient to affirmatively answer these open questions. More precisely, in the larger class of SST spaces, non-uniqueness phenomena can be observed for the implicit Euler scheme, for two nonlinear versions of the Crank-Nicolson scheme, for the fractional step theta scheme, and for the SST-generalized stationary Navier-Stokes equations. As far as stability is concerned, a linear version of the Euler scheme, a nonlinear version of the Crank-Nicolson scheme, and the fractional step theta scheme turn out to be non-stable in the class of SST spaces. The positive results established in this thesis include the generalization of classical uniqueness and stability results to SST spaces, the uniqueness of solutions (under smallness assumptions) to two nonlinear versions of the Euler scheme, two nonlinear versions of the Crank-Nicolson scheme, and the fractional step theta scheme for general SST spaces, the second order convergence of a version of the Crank-Nicolson scheme, and a new proof of the first order convergence of the implicit Euler scheme for the Navier-Stokes equations. For each convergence result, we provide conditions on the data that guarantee the existence of nonstationary solutions satisfying the regularity assumptions needed for the corresponding convergence theorem. In the case of the Crank-Nicolson scheme, this involves a compatibility condition at the corner of the space-time cylinder, which can be satisfied via a suitable prescription of the initial acceleration.
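For orientation, the implicit Euler scheme discussed above, applied to the Navier-Stokes equations with time step $\tau$ (a standard formulation; the SST-space generalization studied in the thesis is not reproduced here), determines $u^{n+1}$ from $u^n$ via

$$\frac{u^{n+1}-u^{n}}{\tau} - \nu\,\Delta u^{n+1} + \big(u^{n+1}\cdot\nabla\big)u^{n+1} + \nabla p^{n+1} = f^{n+1}, \qquad \nabla\cdot u^{n+1} = 0,$$

while the Crank-Nicolson scheme replaces the viscous and nonlinear terms by averages of their values at $t^n$ and $t^{n+1}$; this averaging is the source of both its second-order accuracy and the corner compatibility condition mentioned above.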
Abstract:
This paper describes a trainable system capable of tracking faces and facial features like eyes and nostrils and estimating basic mouth features such as degrees of openness and smile in real time. In developing this system, we have addressed the twin issues of image representation and algorithms for learning. We have used the invariance properties of image representations based on Haar wavelets to robustly capture various facial features. Similarly, unlike previous approaches, this system is entirely trained using examples and does not rely on a priori (hand-crafted) models of facial features based on optical flow or facial musculature. The system works in several stages that begin with face detection, followed by localization of facial features and estimation of mouth parameters. Each of these stages is formulated as a problem in supervised learning from examples. We apply the new and robust technique of support vector machines (SVM) for classification in the stages of skin segmentation, face detection and eye detection. Estimation of mouth parameters is modeled as a regression from a sparse subset of coefficients (basis functions) of an overcomplete dictionary of Haar wavelets.
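As a pointer for readers unfamiliar with the classification stage, the sketch below shows an SVM classifier of the kind applied here to skin segmentation, trained on labelled pixel colours. It uses scikit-learn and invented toy data for illustration; it is not the authors' training set or implementation.

```python
import numpy as np
from sklearn.svm import SVC

# Toy data: RGB pixel values labelled skin (1) / non-skin (0).
# Purely illustrative; real systems train on large annotated pixel sets.
rng = np.random.default_rng(42)
skin = rng.normal(loc=[200, 140, 120], scale=20, size=(200, 3))
non_skin = rng.uniform(0, 255, size=(200, 3))
X = np.vstack([skin, non_skin]) / 255.0        # scale features to [0, 1]
y = np.concatenate([np.ones(200), np.zeros(200)])

clf = SVC(kernel="rbf", C=1.0, gamma="scale")  # standard RBF-kernel SVM
clf.fit(X, y)

# Classify new pixels; in a full pipeline this would run before the
# face-detection and eye-detection stages described above.
test = np.array([[205, 150, 125], [30, 200, 40]]) / 255.0
print(clf.predict(test))                       # expected: [1. 0.]
```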