917 results for STACKING-FAULTS
Abstract:
Identification of low-dimensional structures and of the main sources of variation in multivariate data are fundamental tasks in data analysis. Many methods aimed at these tasks involve the solution of an optimization problem. The objective of this thesis is therefore to develop computationally efficient and theoretically justified methods for solving such problems. Most of the thesis is based on a statistical model in which ridges of the density estimated from the data are considered the relevant features. Finding ridges, which are generalized maxima, necessitates the development of advanced optimization methods. An efficient and convergent trust-region Newton method for projecting a point onto a ridge of the underlying density is developed for this purpose. The method is utilized in a differential-equation-based approach for tracing ridges and computing projection coordinates along them. The density estimation is done nonparametrically using Gaussian kernels, which allows the application of ridge-based methods under only mild assumptions about the underlying structure of the data. The statistical model and the ridge-finding methods are adapted to two different applications. The first is the extraction of curvilinear structures from noisy data mixed with background clutter. The second is a novel nonlinear generalization of principal component analysis (PCA) and its extension to time series data. The methods have a wide range of potential applications where most earlier approaches are inadequate; examples include the identification of faults in seismic data and of filaments in cosmological data. The applicability of the nonlinear PCA to climate analysis and to the reconstruction of periodic patterns from noisy time series data is also demonstrated. Other contributions of the thesis include the development of an efficient semidefinite optimization method for embedding graphs into Euclidean space.
The method produces structure-preserving embeddings that maximize interpoint distances. It is primarily developed for dimensionality reduction, but also has potential applications in graph theory and various areas of physics, chemistry and engineering. The asymptotic behaviour of ridges and maxima of Gaussian kernel densities is also investigated as the kernel bandwidth approaches infinity. The results are applied to the nonlinear PCA and to finding significant maxima of such densities, which is a typical problem in visual object tracking.
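The ridge definition used in such density-based methods can be made concrete with a small numerical sketch. The snippet below is not the thesis's trust-region Newton method; it implements the simpler subspace-constrained mean shift, a related ridge-projection scheme for a Gaussian kernel density. All function names and the example data are illustrative.

```python
import numpy as np

def kde_hessian(x, data, h):
    """Hessian of the (unnormalized) Gaussian kernel density at point x."""
    diff = (data - x) / h                        # (n, D) scaled offsets
    w = np.exp(-0.5 * np.sum(diff**2, axis=1))   # kernel weights
    return (np.einsum('i,ij,ik->jk', w, diff, diff)
            - w.sum() * np.eye(len(x))) / h**2

def scms_step(x, data, h, d=1):
    """One subspace-constrained mean-shift step toward a d-dimensional ridge:
    move along the mean-shift vector projected onto the span of the
    eigenvectors belonging to the D - d smallest Hessian eigenvalues."""
    diff = (data - x) / h
    w = np.exp(-0.5 * np.sum(diff**2, axis=1))
    _, vecs = np.linalg.eigh(kde_hessian(x, data, h))  # ascending eigenvalues
    V = vecs[:, :len(x) - d]          # "normal space" of the ridge
    m = (w @ data) / w.sum() - x      # ordinary mean-shift vector
    return x + V @ (V.T @ m)          # constrained move
```

Iterating `scms_step` from each sample projects it onto a one-dimensional ridge of the density; the thesis's trust-region Newton method solves the same projection problem with stronger convergence guarantees.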
Abstract:
This work examined electrical inspection methods for the preventive condition monitoring of a squirrel-cage induction motor, and the ability of these methods to detect different faults. The work was carried out at Porvoon Energia's Tolkkinen power plant and also serves as an inspection guide for the methods presented. The inspection methods used were able to detect some of the faults, and they can be used as part of preventive condition monitoring. However, not all faults could be detected with the methods presented in this work.
Abstract:
The aim of this work was to design a stacking device for a drawing machine line. The task of the device is to stack products arriving one by one from the drawing machine into stacks that meet customer requirements. The device was designed using the systematic machine design method. The master's thesis was limited to the design of the device's mechanical solutions. The design took into account the requirements imposed by the line, concerning both production volume and use of space. SolidWorks 3D modelling software was used for the design and modelling of the device. The modelled solution alternatives were compared with each other, and the solution that best met the requirements in the requirements specification was selected for further development. Working drawings and assembly drawings were produced for the designed device.
Abstract:
Nowadays, computer-based systems tend to become more complex and to control increasingly critical functions affecting different areas of human activity. Failures of such systems might result in loss of human lives as well as significant damage to the environment; therefore, their safety needs to be ensured. However, the development of safety-critical systems is not a trivial exercise. Hence, to preclude design faults and guarantee the desired behaviour, different industrial standards prescribe the use of rigorous techniques for the development and verification of such systems. The more critical the system, the more rigorous the approach that should be undertaken. To ensure the safety of a critical computer-based system, satisfaction of the safety requirements imposed on it should be demonstrated. This task involves a number of activities. In particular, a set of safety requirements is usually derived by conducting various safety analysis techniques. Strong assurance that the system satisfies the safety requirements can be provided by formal methods, i.e., mathematically based techniques. At the same time, evidence that the system under consideration meets the imposed safety requirements might be demonstrated by constructing safety cases. However, the overall safety assurance process of critical computer-based systems remains insufficiently defined for the following reasons. Firstly, there are semantic differences between safety requirements and formal models: informally represented safety requirements should be translated into the underlying formal language to enable further verification. Secondly, the development of formal models of complex systems can be labour-intensive and time-consuming. Thirdly, there are only a few well-defined methods for the integration of formal verification results into safety cases.
This thesis proposes an integrated approach to the rigorous development and verification of safety-critical systems that (1) facilitates elicitation of safety requirements and their incorporation into formal models, (2) simplifies formal modelling and verification by proposing specification and refinement patterns, and (3) assists in the construction of safety cases from the artefacts generated by formal reasoning. Our chosen formal framework is Event-B. It allows us to tackle the complexity of safety-critical systems as well as to structure safety requirements by applying abstraction and stepwise refinement. The Rodin platform, a tool supporting Event-B, assists in automatic model transformations and proof-based verification of the desired system properties. The proposed approach has been validated by several case studies from different application domains.
Abstract:
The bedrock of old crystalline cratons is characteristically saturated with brittle structures formed during successive superimposed episodes of deformation and under varying stress regimes. As a result, the crust effectively deforms through the reactivation of pre-existing structures rather than through the activation, or generation, of new ones, and is said to be in a state of 'structural maturity'. By combining data from Olkiluoto Island, southwestern Finland, which has been investigated as the potential site of a deep geological repository for high-level nuclear waste, with observations from southern Sweden, it can be concluded that the southern part of the Svecofennian shield had already attained structural maturity during the Mesoproterozoic era. This indicates that the phase of activation of the crust, i.e. the time interval during which new fractures were generated, was brief in comparison with the subsequent reactivation phase. Structural maturity of the bedrock was also attained relatively rapidly in Namaqualand, western South Africa, after the formation of the first brittle structures during Neoproterozoic time. Subsequent brittle deformation in Namaqualand was controlled by the reactivation of pre-existing strike-slip faults. In such settings, seismic events are likely to occur through the reactivation of pre-existing zones that are favourably oriented with respect to the prevailing stresses. In Namaqualand, this is shown for present-day seismicity by slip tendency analysis, and at Olkiluoto, for a Neoproterozoic earthquake reactivating a Mesoproterozoic fault. By combining detailed field observations with the results of paleostress inversions and relative and absolute time constraints, seven distinct superimposed paleostress regimes have been recognized in the Olkiluoto region.
From oldest to youngest these are: (1) NW-SE to NNW-SSE transpression, which prevailed soon after 1.75 Ga, when the crust had cooled sufficiently to allow brittle deformation to occur; during this phase, conjugate NNW-SSE and NE-SW striking strike-slip faults were active simultaneously with the reactivation of SE-dipping low-angle shear zones and foliation planes. This was followed by (2) N-S to NE-SW transpression, which caused partial reactivation of structures formed in the first event; (3) NW-SE extension during the Gothian orogeny, at the time of rapakivi magmatism and the intrusion of diabase dikes; (4) NE-SW transtension, which occurred between 1.60 and 1.30 Ga and also formed the NW-SE-trending Satakunta graben located some 20 km north of Olkiluoto; greisen-type veins also formed during this phase; (5) NE-SW compression, which postdates the formation of both the 1.56 Ga rapakivi granites and the 1.27 Ga olivine diabases of the region; and (6) E-W transpression during the early stages of the Mesoproterozoic Sveconorwegian orogeny, which predated (7) almost coaxial E-W extension attributed to the collapse of the Sveconorwegian orogeny. The kinematic analysis of fracture systems in crystalline bedrock also provides a robust framework for evaluating fluid-rock interaction in the brittle regime; this is essential in the assessment of bedrock integrity for numerous geo-engineering applications, including groundwater management, transient or permanent CO2 storage, and site investigations for permanent waste disposal. Investigations at Olkiluoto revealed that fluid flow along fractures is coupled with low normal tractions due to in-situ stresses and thus deviates from the generally accepted critically stressed fracture concept, in which fluid flow is concentrated on fractures on the verge of failure.
The difference is linked to the shallow conditions of Olkiluoto: because of the low differential stresses inherent at shallow depths, fracture activation and fluid flow are controlled by dilation due to low normal tractions. In deeper settings, however, fluid flow is controlled by fracture criticality caused by large differential stress, which drives shear deformation instead of dilation.
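Slip tendency analysis, as invoked above, reduces to resolving a stress tensor onto a fault plane. A minimal sketch, assuming a principal-stress coordinate frame and a compression-positive sign convention; the stress values are illustrative, not Namaqualand or Olkiluoto data:

```python
import numpy as np

def slip_tendency(stress, normal):
    """Slip tendency Ts = tau / sigma_n of a plane with normal `normal`
    under the stress tensor `stress` (compression positive)."""
    n = np.asarray(normal, dtype=float)
    n /= np.linalg.norm(n)
    t = stress @ n                          # traction vector on the plane
    sigma_n = float(t @ n)                  # normal traction
    tau = float(np.linalg.norm(t - sigma_n * n))  # resolved shear traction
    return tau / sigma_n

# Illustrative stress state (MPa): sigma1 = 60 (x), sigma2 = 40 (y), sigma3 = 30 (z).
stress = np.diag([60.0, 40.0, 30.0])
# Plane whose normal lies 60 degrees from sigma1 in the sigma1-sigma3 plane.
th = np.radians(60.0)
ts = slip_tendency(stress, [np.cos(th), 0.0, np.sin(th)])
```

Faults with Ts approaching the friction coefficient (around 0.6 for Byerlee friction) are the favourably oriented, reactivation-prone structures the text describes.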
Abstract:
Permanent magnet synchronous machines (PMSM) have become widely used because of their high efficiency compared to synchronous machines with excitation windings or to induction motors. This feature of the PMSM is achieved by using permanent magnets (PM) as the main excitation source. The magnetic properties of the PM have a significant influence on all PMSM characteristics. Recent observations of PM material properties in rotating machines have revealed that the magnets do not necessarily operate in the second quadrant of the demagnetization curve, which makes them prone to hysteresis losses. Moreover, no good analytical approach has yet been derived for the magnetic flux density distribution along the PM during different short-circuit faults. The main task of this thesis is to derive a simple analytical tool that can predict the magnetic flux density distribution along a rotor-surface-mounted PM in two cases: during normal operation, and at the instant of a three-phase symmetrical short circuit that is worst from the PM's point of view. Surface-mounted PMSMs were selected because of their prevalence and relatively simple construction. The proposed model is based on the combination of two theories: magnetic circuit theory and space vector theory. For the normal operating mode, comparison of results from finite element software with results calculated by the proposed model shows good accuracy in the parts of the PM that are most prone to hysteresis losses. For the three-phase symmetrical short circuit, the comparison revealed significant inaccuracy of the proposed model relative to the finite element results. The reasons for this inaccuracy are analysed, including the impact of Carter factor theory and of the assumption that air has the same permeability as the PM. Propositions for further development of the model are presented.
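The magnetic-circuit side of such a model can be illustrated with the textbook one-dimensional operating point of a surface-mounted magnet. This sketch is not the thesis's model (which additionally uses space vectors and resolves the flux density distribution along the magnet), and the magnet data are illustrative:

```python
def pm_airgap_flux_density(B_r, mu_r, l_m, g, k_c=1.0):
    """No-load airgap flux density from the 1-D magnetic circuit of a
    surface-mounted PM over a Carter-corrected airgap k_c * g, assuming
    infinitely permeable iron and no leakage:
        B = B_r / (1 + mu_r * k_c * g / l_m)
    """
    return B_r / (1.0 + mu_r * k_c * g / l_m)

# Illustrative NdFeB magnet: B_r = 1.2 T, mu_r = 1.05, 5 mm magnet, 1 mm gap.
B = pm_airgap_flux_density(B_r=1.2, mu_r=1.05, l_m=5e-3, g=1e-3, k_c=1.1)
```

A short circuit adds a demagnetizing armature-reaction MMF to this circuit, which is what can push parts of the magnet out of the second quadrant of the demagnetization curve.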
Abstract:
Due to various advantages such as flexibility, scalability and updatability, software-intensive systems are increasingly embedded in everyday life. The constantly growing number of functions executed by these systems requires a high level of performance from the underlying platform. The main approach to increasing performance has been raising the operating frequency of the chip. However, this has led to the problem of power dissipation, which has shifted the focus of research to parallel and distributed computing. Parallel many-core platforms can provide the required level of computational power along with low power consumption. On the one hand, this enables parallel execution of highly intensive applications; with their computational power, these platforms are likely to be used in various application domains, from home electronics (e.g., video processing) to complex critical control systems. On the other hand, the utilization of the resources has to be efficient in terms of performance and power consumption. However, the high level of on-chip integration increases the probability of various faults and the creation of hotspots leading to thermal problems. Additionally, radiation, which is frequent in space but becomes an issue also at ground level, can cause transient faults. This can eventually induce faulty execution of applications. Therefore, it is crucial to develop methods that enable efficient as well as resilient execution of applications. The main objective of the thesis is to propose an approach to designing agent-based systems for many-core platforms in a rigorous manner. When designing such a system, we explore and integrate various dynamic reconfiguration mechanisms into the agents' functionality. The use of these mechanisms enhances the resilience of the underlying platform whilst maintaining performance at an acceptable level.
The design of the system proceeds according to a formal refinement approach, which allows us to ensure correct behaviour of the system with respect to the postulated properties. To enable analysis of the proposed system in terms of area overhead as well as performance, we explore an approach where the developed rigorous models are transformed into a high-level implementation language. Specifically, we investigate methods for deriving fault-free implementations from these models in, e.g., a hardware description language, namely VHDL.
Abstract:
This study was designed to assess the influence of resistance training on salivary immunoglobulin A (IgA) levels and hormone profile in sedentary adults with Down syndrome (DS). A total of 40 male adults with DS were recruited for the trial through different community support groups for people with intellectual disabilities. All participants had medical approval for participation in physical activity. Twenty-four adults were randomly assigned to perform resistance training in a circuit with six stations, 3 days per week for 12 weeks. Training intensity was based on performance in the eight-repetition maximum (8RM) test for each exercise. The control group included 16 age-, gender-, and BMI-matched adults with DS. Salivary IgA, testosterone, and cortisol levels were measured by ELISA. Work task performance was assessed using the repetitive weighted-box-stacking test. Resistance training significantly increased salivary IgA concentration (P=0.0120; d=0.94) and testosterone levels (P=0.0088; d=1.57) in the exercising group. Furthermore, it also improved work task performance. No changes were seen in the controls, who had not exercised. In conclusion, a short-term resistance training protocol improved mucosal immunity response as well as salivary testosterone levels in sedentary adults with DS.
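The effect sizes reported above (d=0.94, d=1.57) are presumably Cohen's d values; for reference, a sketch of the standard pooled-standard-deviation computation (the sample numbers in the example are made up, not study data):

```python
import math

def cohens_d(group1, group2):
    """Cohen's d: difference of means divided by the pooled sample SD."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = sum(group1) / n1, sum(group2) / n2
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)   # sample variances
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd
```

Values of about 0.8 and above are conventionally read as large effects, which is why both reported results count as substantial.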
Abstract:
Software is a key component in many of the devices and products that we use every day. Most customers demand not only that their devices function as expected but also that the software be of high quality: reliable, fault tolerant, efficient, etc. In short, it is not enough that a calculator gives the correct result of a calculation; we want the result instantly, in the right form, with minimal use of battery, etc. One of the key aspects of succeeding in today's industry is delivering high quality. In most software development projects, high-quality software is achieved by rigorous testing and good quality assurance practices. However, today, customers are asking for these high-quality software products at an ever-increasing pace. This leaves companies with less time for development. Software testing is an expensive activity, because it requires much manual work. Testing, debugging, and verification are estimated to consume 50 to 75 per cent of the total development cost of complex software projects. Further, the most expensive software defects are those which have to be fixed after the product is released. One of the main challenges in software development is reducing the associated cost and time of software testing without sacrificing the quality of the developed software. It is often not enough to demonstrate only that a piece of software is functioning correctly. Usually, many other aspects of the software, such as performance, security, scalability, usability, etc., also need to be verified. Testing these aspects of the software is traditionally referred to as non-functional testing. One of the major challenges with non-functional testing is that it is usually carried out at the end of the software development process, when most of the functionality is implemented. This is due to the fact that non-functional aspects, such as performance or security, apply to the software as a whole. In this thesis, we study the use of model-based testing.
We present approaches to automatically generate tests from behavioral models for solving some of these challenges. We show that model-based testing is applicable not only to functional testing but also to non-functional testing. In its simplest form, performance testing is performed by executing multiple test sequences at once while observing the software in terms of responsiveness and stability, rather than the output. The main contribution of the thesis is a coherent model-based testing approach for testing functional and performance-related issues in software systems. We show how we go from system models, expressed in the Unified Modeling Language, to test cases and back to models again. The system requirements are traced throughout the entire testing process. Requirements traceability facilitates finding faults in the design and implementation of the software. In the research field of model-based testing, many newly proposed approaches suffer from poor or missing tool support. Therefore, the second contribution of this thesis is proper tool support for the proposed approach that is integrated with leading industry tools. We offer independent tools, tools that are integrated with other industry-leading tools, and complete tool-chains when necessary. Many model-based testing approaches proposed by the research community suffer from poor empirical validation in an industrial context. In order to demonstrate the applicability of our proposed approach, we apply our research to several systems, including industrial ones.
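The core of generating tests from behavioral models can be illustrated with a small sketch: given a state-machine model, derive input sequences that together cover every transition. This is a generic transition-coverage algorithm, not the thesis's UML-based toolchain, and the FSM encoding is an assumption:

```python
from collections import deque

def transition_tour(fsm, start):
    """Input sequences that together cover every transition of a deterministic
    FSM given as {state: {input: next_state}}. Assumes all states are
    reachable from `start`."""
    tests = []
    uncovered = {(s, i) for s, trans in fsm.items() for i in trans}
    while uncovered:
        s0, i0 = next(iter(uncovered))
        # BFS for the shortest input word from `start` reaching state s0.
        frontier, seen = deque([(start, [])]), {start}
        prefix = []
        while frontier:
            s, path = frontier.popleft()
            if s == s0:
                prefix = path
                break
            for i, nxt in fsm[s].items():
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + [i]))
        test = prefix + [i0]
        tests.append(test)
        # Mark every transition exercised by this test as covered.
        s = start
        for i in test:
            uncovered.discard((s, i))
            s = fsm[s][i]
    return tests
```

Executing each generated sequence against the implementation and comparing the observed behaviour with the model's is then the conformance check; performance variants run many such sequences concurrently, as described above.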
Abstract:
The purpose of this master's thesis was to develop the supplier management of an industrial company by means of performance measurement. A supplier performance measurement system is designed and implemented in the thesis with the help of theoretical knowledge. As a result of the project, the case company has a comprehensive supplier performance measurement system and improved supplier management processes. The theoretical part of the thesis examines various frameworks for developing a performance measurement system, as well as the requirements of a good measurement system and recommended areas of measurement. In the empirical part, the measurement system is built step by step based on a case study. The developed system was tested during a simulation phase at three of the case company's factories. Based on user feedback, the system met the most important requirements identified in the theoretical part: it is easy to use and visual, and its reporting provides valuable information for supplier evaluation. Concrete benefits for the case company have included improved detection of quality deviations and better evaluation of supplier price competitiveness. The most significant benefit for the case company, however, has been the systematic supplier evaluation process that the performance measurement system enables.
Abstract:
In repair services, time-consuming cases are faults in integrated circuits that are difficult to locate. For such fault finding, our company had purchased a Polar Fault Locator 780 measuring instrument, which can examine the operation of integrated circuits using analogue signature analysis. The aim of this master's thesis was to determine how this measurement technique can be used in repair services. The investigation was approached from the perspective of some typical components, but the main focus was on integrated circuits. Some integrated circuits were deliberately damaged, after which the measurements were repeated and the effect of the damage on the measurement results was examined. The research methods were a literature review and empirical experimentation. The result of the thesis was that the condition of integrated circuits can be examined using this measurement technique. The problems turned out to be the original assumption about how the instrument's results are interpreted and the poor availability of background material. The instrument is therefore best suited to situations in which its results are compared directly with the measurements of another unit known to be working. Nevertheless, a clear anomaly was observable in the damaged components.
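The comparison-based use of analogue signature analysis can be sketched as follows: the instrument drives a current-limited sine voltage into a pin and records the resulting voltage-current curve, and a faulty pin shows up as a deviation from a known-good signature. Everything below (the load models, the deviation metric) is illustrative, not Polar Fault Locator 780 behaviour:

```python
import numpy as np

def vi_signature(load_current, v_amp=10.0, n=200):
    """Simulated analogue signature: sweep a sine voltage across the pin and
    record the current drawn; `load_current` maps voltage -> current."""
    v = v_amp * np.sin(np.linspace(0.0, 2.0 * np.pi, n))
    return v, np.array([load_current(x) for x in v])

def signature_deviation(i_ref, i_test):
    """RMS difference between a known-good and a suspect signature."""
    return float(np.sqrt(np.mean((i_ref - i_test) ** 2)))

# Known-good pin modelled as a 1 kOhm resistive path; fault: open circuit.
_, i_good = vi_signature(lambda v: v / 1000.0)
_, i_open = vi_signature(lambda v: 0.0)
```

This mirrors the conclusion above: the signatures are most useful when interpreted by direct comparison with a unit known to be working rather than in absolute terms.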
Abstract:
This master's thesis defined normal operation at Neste's refineries in Finland and the monitoring required by the integrated emissions management model. The first objective was to establish a definition of normal operation at an oil refinery according to the integrated emissions management model. The second objective was to find solutions for the poorly specified parts of the monitoring under the model. The factors defining normal refinery operation were identified by discussing with refinery operating personnel the sites critical to emissions management and to the operation of the refinery as a whole. The usefulness of the identified factors for defining normal operation was examined using historical data from the automation systems of Neste's refineries. Disturbances in the fuel and flue gas purification processes, and steam production disturbances at the power plant of the Porvoo refinery, were identified as clearly abnormal operation because of the increased emissions they cause. In addition, the number of units in operation was identified as an indicator of normal refinery operation. Solutions for the gaps in the monitoring guidance were sought from existing guidance in other areas of application. The monitoring needs of the integrated emissions management model were found to have much in common with emissions trading monitoring, so the emissions trading monitoring guidance and its provisions were recommended as supplementary monitoring guidance. New challenges in implementing regulatory oversight were identified in the integrated emissions management model.
Abstract:
The energy generation industry is very capital-intensive. Productivity and availability requirements have grown, as have competition and quality requirements, and maintenance plays a significant role in meeting them. Although maintenance is nowadays much more than repairing faults, spare parts remain an important part of it. Large power boilers are user-specific, so boiler features vary from project to project. Equipment is designed to the customer's requirements, and spare parts are therefore mainly user-specific as well. The study starts with a literature review introducing maintenance, failure mechanisms, and the systems and equipment of a bubbling fluidized bed boiler. The final part discusses spare part management from a boiler technology point of view. For this part of the study, scientific publications on spare part management are utilized, and specialists from a boiler technology company and other original equipment manufacturers were interviewed. Spare part management is challenging from the boiler supplier's point of view, and the end user of the spare parts bears responsibility for stocking items. Criticality analysis can be used to find the most critical devices of the process, and spare part management should focus on those items. Spare parts are part of risk management: stocking spare parts increases costs, but high spare part availability decreases the delay caused by the failure of an item.
Abstract:
Polarized reflectance measurements of the quasi-1-D charge-transfer salt (TMTSF)2ClO4 were carried out using a Martin-Puplett-type polarizing interferometer and a 3He refrigerator cryostat, at several temperatures between 0.45 K and 26 K, in the far infrared, in the 10 to 70 cm-1 frequency range. Bis-tetramethyl-tetraselena-fulvalene perchlorate crystals, grown electrochemically and supplied by K. Behnia, of dimensions 2 to 4 by 0.4 by 0.2 mm, were assembled on a flat surface to form a mosaic of 1.5 by 3 mm. The needle-shaped crystals were positioned parallel to each other along their long axis, which is the stacking direction of the planar TMTSF cations, exposing the ab-plane face (parallel to which the sheets of ClO4 anions are positioned). Reflectance measurements were performed with radiation polarized along the stacking direction in the sample. Measurements were carried out following either a fast (15-20 K per minute) or slow (0.1 K per minute) cooling of the sample. Slow cooling permits the anions to order near 24 K, and the sample is expected to be superconducting below 1.2 K, while fast cooling yields an insulating state at low temperatures. Upon slow cooling the reflectance shows a dependence on temperature and exhibits the 28 cm-1 feature reported previously [1]. Thermoreflectance for both the 'slow' and 'fast' cooling of the sample, calculated relative to the 26 K reflectance data, indicates that the reflectance is temperature dependent for the slow cooling case only. A low-frequency edge in the absolute reflectance is assigned an electronic origin given its strong temperature dependence in the relaxed state. We attribute the peak in the absolute reflectance near 30 cm-1 to a phonon coupled to the electronic background. Both the low-frequency edge and the 30 cm-1 feature are noted to shift towards higher frequency, upon entering the superconducting state, by an amount of the order of the expected superconducting energy gap.
Kramers-Kronig analysis was carried out to determine the optical conductivity of the slowly cooled sample from the measured reflectance. In order to do so, the low-frequency data were extrapolated to zero frequency using a Hagen-Rubens behaviour, and the high-frequency data were extended with the data of Cao et al. [2] and Kikuchi et al. [3]. The real part of the optical conductivity exhibits an asymmetric peak at 35 cm-1, and its background at lower frequencies seems to lose spectral weight with lowering of the temperature, leading us to presume that a narrow peak is forming at even lower frequencies.
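The Kramers-Kronig step described above can be sketched numerically. Below is a minimal, subtracted-integrand version of the reflectance phase transform theta(w) = -(w/pi) P∫ ln R(w')/(w'^2 - w^2) dw' on a uniform grid; the residual term from the subtraction vanishes only for the semi-infinite integration range and is neglected here, and a real analysis (as in the text) needs the Hagen-Rubens low-frequency and literature high-frequency extrapolations:

```python
import numpy as np

def kk_phase(omega, R):
    """Reflectance phase from the Kramers-Kronig relation on a uniform grid.
    Subtracting ln R(w) removes the principal-value singularity, so a
    plain Riemann sum suffices for this sketch."""
    lnR = np.log(R)
    dw = omega[1] - omega[0]              # assumes a uniform grid
    theta = np.empty_like(omega)
    for k, w in enumerate(omega):
        den = omega**2 - w**2
        integrand = np.divide(lnR - lnR[k], den,
                              out=np.zeros_like(omega), where=den != 0)
        theta[k] = -(w / np.pi) * integrand.sum() * dw
    return theta
```

With R(w) and theta(w) in hand, the complex refractive index N, and hence the real part of the optical conductivity, follows from the Fresnel relation r = sqrt(R) e^(i theta) = (1 - N)/(1 + N).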
Abstract:
The Sand Creek Prospect is located within the eastern exposed margin of the Coast Plutonic Complex. The occurrence is a plug-and-dyke porphyry molybdenum deposit. The rock types, listed in order of decreasing age, are: 1) metamorphic schists and gneisses; 2) diorite suite rocks (diorite, quartz diorite, tonalite); 3) rocks of andesitic composition; 4) granodiorites, coarse porphyritic granodiorite, quartz-feldspar porphyry, feldspar porphyry; and 5) lamprophyre. Hydrothermal alteration is known to have resulted from the emplacement of the hornblende-feldspar porphyry through to the quartz-feldspar porphyry. Molybdenum mineralization is chiefly associated with the quartz-feldspar porphyry. Ore mineralogy is dominated by pyrite with subordinate molybdenite, chalcopyrite, covelline, sphalerite, galena, scheelite, cassiterite and wolframite. Molybdenite exhibits a textural gradation outward from the quartz-feldspar porphyry: disseminated rosettes and rosettes in quartz veins, to fine-grained molybdenite in quartz veins and potassic-altered fractures, to fine-grained molybdenite paint or smears in the peripheral zones. The quartz-feldspar porphyry dykes were emplaced in an inhomogeneous stress field. The trend of dykes, faults and shear zones is 0^1° to 063°, with dips between 58° NW and 86° SE. Joint pole distribution reflects this fault orientation. These late deformation maxima are probably superimposed upon annuli representing diapiric emplacement of the plutons. A model of emplacement involving two magmatic pulses is given in the following sequence: diorite pulse (i) diorite-quartz diorite, (ii) tonalites; granodiorite pulse (iii) hornblende-feldspar microporphyry and hornblende/biotite porphyry, (iv) coarse-grained granodiorite, (v) quartz-feldspar porphyry, (vi) feldspar porphyry, and (vii) lamprophyre. The combination of plutonic and coarse porphyritic textures and the extensive propylitic overprinting of potassic alteration assemblages suggests that the prospect represents the lower reaches of a porphyry system.