12 results for software component
in CORA - Cork Open Research Archive - University College Cork - Ireland
Abstract:
The objective of this paper is to investigate the effect of the pad size ratio between the chip and board ends of a solder joint, in combination with the available solder volume, on the shape of that joint. The shape of the solder joint correlates with its reliability and is thus of importance. For low-density chip bond pad applications, Flip Chip (FC) manufacturing costs can be kept down by using larger board pads suitable for solder application. Using the “Surface Evolver” software package, the solder joint shapes associated with different solder preform sizes/shapes and chip/board pad ratios are predicted. In this case a so-called Flip-Chip Over Hole (FCOH) assembly format has been used. Assembly trials involved the deposition of lead-free 99.3Sn0.7Cu solder on the board side, followed by reflow, an underfill process and back die encapsulation. Pad offsets that occurred during the assembly work were taken into account in the Surface Evolver solder joint shape prediction, which then accurately matched the real assembly. Overall, good correlation was found between the simulated and the actual fabricated solder joint shapes. Solder preforms were found to give better control over the solder volume. Reflow simulation of commercially available solder preform volumes suggests that, for a fixed stand-off height and chip/board pad ratio, the solder volume and the surface tension determine the shape of the joint.
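As a rough illustration of the volume/geometry relationship such simulations capture, the sketch below approximates the joint as a conical frustum between the two pads. This is only a first-order feasibility check, not the paper's method: a real Surface Evolver run minimises surface energy and captures the bulging meniscus, and the dimensions used here are hypothetical, not taken from the paper.

```python
import math

def frustum_volume(r_chip: float, r_board: float, height: float) -> float:
    """Solder volume needed to span two circular pads, approximating the
    joint as a conical frustum.  A crude first-order stand-in for a
    Surface Evolver energy minimisation: it ignores the bulging of the
    real meniscus, but gives a quick feasibility check of whether a
    given preform volume can support a target stand-off height."""
    return math.pi * height / 3 * (r_chip**2 + r_chip * r_board + r_board**2)

# Hypothetical dimensions (not from the paper): 50 um chip pad radius,
# 150 um board pad radius (1:3 pad ratio), 80 um stand-off height.
v = frustum_volume(50e-6, 150e-6, 80e-6)
print(f"required solder volume ~ {v * 1e9:.5f} mm^3")
```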
Abstract:
A comparison study was carried out between a wireless sensor node with a bare-die flip-chip-mounted transceiver and its reference board with a BGA-packaged transceiver chip. The main focus is the return loss (S-parameter S11) at the antenna connector, which depends strongly on the impedance mismatch. Modelling, including the different interconnect technologies, substrate properties and passive components, was performed to simulate the system in Ansoft Designer software. Statistical methods, such as standard deviation and regression, were applied to the RF performance analysis to see the impact of the different parameters on the return loss. An extreme-value search, following on from this analysis, provides the parameter values for minimum return loss. Measurements fit the analysis and simulation well and showed a large improvement in return loss, from -5 dB to -25 dB, for the target wireless sensor node.
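For reference, the return loss quoted above follows directly from the reflection coefficient at the connector. The sketch below shows the standard calculation; the load impedances are illustrative, not the node's measured values.

```python
import math

def s11_db(z_load: complex, z0: float = 50.0) -> float:
    """Return loss S11 in dB at a port with reference impedance z0.

    Standard relation: Gamma = (Z_L - Z0) / (Z_L + Z0),
    S11(dB) = 20 * log10(|Gamma|); a perfect match gives -inf dB.
    """
    gamma = (z_load - z0) / (z_load + z0)
    return 20 * math.log10(abs(gamma))

# Illustrative load impedances (not the node's measured values):
print(round(s11_db(180 + 0j), 1))    # badly mismatched port: about -5 dB
print(round(s11_db(52 + 3j), 1))     # well matched port: below -25 dB
```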
Abstract:
The aim of this study was to develop a methodology, based on satellite remote sensing, to estimate the vegetation Start of Season (SOS) across the whole island of Ireland on an annual basis. This growing body of research is known as Land Surface Phenology (LSP) monitoring. The SOS was estimated for each year from a 7-year time series of 10-day composited, 1.2 km reduced-resolution MERIS Global Vegetation Index (MGVI) data from 2003 to 2009, using the time series analysis software TIMESAT. The selection of a 10-day compositing period was guided by in-situ observations of leaf unfolding and cloud cover at representative point locations on the island. The MGVI time series was smoothed and the SOS metric extracted at the point corresponding to 20% of the seasonal MGVI amplitude. The SOS metric was extracted on a per-pixel basis and gridded for national-scale coverage. There were consistent spatial patterns in the SOS grids which were replicated on an annual basis and were qualitatively linked to variation in landcover. Analysis revealed that three statistically separable groups of CORINE Land Cover (CLC) classes could be derived from differences in the SOS, namely agricultural and forest land cover types, peat bogs, and natural and semi-natural vegetation types. These groups demonstrated that managed vegetation, e.g. pastures, has a significantly earlier SOS than unmanaged vegetation, e.g. natural grasslands. There was also interannual spatio-temporal variability in the SOS. Such variability was highlighted in a series of anomaly grids showing variation from the 7-year mean SOS. An initial climate analysis indicated that an anomalously cold winter and spring in 2005/2006, linked to a negative North Atlantic Oscillation index value, delayed the 2006 SOS countrywide, while in other years the SOS anomalies showed more complex variation. A correlation study using air temperature as a climate variable revealed the spatial complexity of the air temperature-SOS relationship across the Republic of Ireland, as the timing of maximum correlation varied from November to April depending on location. The SOS was found to occur earlier with warmer winters in the southeast, while it occurred later with warmer winters in the northwest. The inverse pattern emerged in the spatial patterns of the spring correlates. This contrasting pattern would appear to be linked to vegetation management, as arable cropping is typically practised in the southeast while there is mixed agriculture and mostly pasture to the west. Therefore, land use as well as air temperature appears to be an important determinant of national-scale patterns in the SOS. The TIMESAT tool formed a crucial component of the estimation of the SOS across the country in all seven years, as it minimised the negative impact of noise and data dropouts in the MGVI time series by applying a smoothing algorithm. The extracted SOS metric was sensitive to temporal and spatial variation in land surface vegetation seasonality, while the spatial patterns in the gridded SOS estimates aligned with those in landcover type. The methodology can be extended to a longer time series of FAPAR, as MERIS will be replaced by the ESA Sentinel mission in 2013, while the availability of full-resolution (300 m) MERIS FAPAR and equivalent sensor products holds out the possibility of monitoring finer-scale variation in seasonality.
This study has shown the utility of the SOS metric as an indicator of spatio-temporal variability in vegetation phenology, as well as a correlate of other environmental variables such as air temperature. However, the satellite-based method is not seen as a replacement for ground-based observations, but rather as a complementary approach to studying vegetation phenology at the national scale. In future, the method can be extended to extract other metrics of the seasonal cycle in order to gain a more comprehensive view of seasonal vegetation development.
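The core extraction step, finding where the rising limb of the smoothed curve crosses 20% of the seasonal amplitude, is compact enough to sketch. The Python below mimics that threshold rule on a synthetic 10-day composite series; it is not TIMESAT itself, which additionally fits smoothing functions (e.g. Savitzky-Golay or logistic models) to the raw data.

```python
import numpy as np

def start_of_season(doy: np.ndarray, vi: np.ndarray, frac: float = 0.2) -> float:
    """SOS as the day the rising limb of a pre-smoothed vegetation-index
    curve first crosses `frac` of the seasonal amplitude (the
    20%-of-amplitude rule used in the study)."""
    base, peak = float(vi.min()), float(vi.max())
    threshold = base + frac * (peak - base)
    rising = np.arange(len(vi)) <= int(vi.argmax())      # up to the seasonal peak
    i = int(np.where(rising & (vi >= threshold))[0][0])
    if i == 0:
        return float(doy[0])
    # linear interpolation between the two bracketing composites
    f = (threshold - vi[i - 1]) / (vi[i] - vi[i - 1])
    return float(doy[i - 1] + f * (doy[i] - doy[i - 1]))

# Synthetic 10-day composites for one year (illustrative values only)
doy = np.arange(5, 366, 10)
vi = 0.2 + 0.5 * np.exp(-((doy - 180) / 60.0) ** 2)      # idealised seasonal curve
print(f"SOS ~ day {start_of_season(doy, vi):.0f}")       # ~ day 104
```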
Abstract:
This thesis is focused on the design and development of an integrated magnetic (IM) structure for use in high-power, high-current power converters employed in renewable energy applications. These applications require low-cost, high-efficiency and high-power-density magnetic components, and the use of IM structures can help achieve this goal. A novel CCTT-core split-winding integrated magnetic (CCTT IM) is presented in this thesis. This IM is optimized for use in high-power dc-dc converters. The CCTT IM design is an evolution of the traditional EE-core integrated magnetic (EE IM). The CCTT IM structure uses a split-winding configuration, allowing for the reduction of external leakage inductance, which is a problem for many traditional IM designs such as the EE IM. Magnetic poles are incorporated to help shape and contain the leakage flux within the core window. These magnetic poles have the added benefit of minimizing the winding power loss due to the airgap fringing flux, as they shape the fringing flux away from the split-windings. A CCTT IM reluctance model is developed which uses fringing equations to accurately predict the most probable regions of fringing flux around the pole and winding sections of the device. This helps in the development of a more accurate model, as it predicts both the dc and ac inductance of the component. A CCTT IM design algorithm is developed which relies heavily on the reluctance model of the CCTT IM. The design algorithm is implemented using the mathematical software tool Mathematica. This algorithm is modular in structure and allows for the quick and easy design and prototyping of the CCTT IM. The algorithm allows for the investigation of the CCTT IM boxed volume as the input current ripple is varied, for different power ranges, magnetic materials and frequencies. A high-power 72 kW CCTT IM prototype is designed and developed for use in an automotive fuel-cell-based drivetrain. The CCTT IM design algorithm is initially used to design the component, while 3D and 2D finite element analysis (FEA) software is used to optimize the design. Low-cost, low-power-loss ferrite 3C92 is used for its construction and, combined with a low number of turns, results in a very efficient design. A paper study is undertaken which compares the performance of the high-power CCTT IM design with that of two discrete inductors used in a two-phase (2L) interleaved converter. The 2L option consists of two discrete inductors constructed from high-dc-bias material. Both topologies are designed for the same worst-case phase current ripple, ensuring a like-for-like comparison. The comparison indicates that the total magnetic-component boxed volume of the two converters is similar, while the CCTT IM has significantly lower power loss. Experimental results for the 72 kW prototype (155 V dc, 465 A dc input, 420 V dc output) validate the CCTT IM concept, with the component shown to be 99.7% efficient. The high-power experimental testing was conducted at the General Motors advanced technology center in Torrance, Los Angeles. Calorimetric testing was used to determine the power loss in the CCTT IM component. A 3.8 kW prototype and accompanying experimental results compare and contrast the ferrite CCTT IM and high-dc-bias 2L concepts over the typical operating range of a fuel cell under like-for-like conditions. The CCTT IM is shown to perform better than the 2L option over the entire power range. An 8 kW ferrite CCTT IM prototype is developed for use in photovoltaic (PV) applications. 
The CCTT IM is used in a boost pre-regulator as part of the PV power stage. The CCTT IM is compared with an industry-standard 2L converter consisting of two discrete ferrite toroidal inductors. The magnetic components are compared for the same worst-case phase current ripple, and the experimental testing is conducted over the operating range of a PV panel. The prototype CCTT IM allows for a 50% reduction in total boxed volume and mass in comparison to the baseline 2L option, while showing increased efficiency.
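To give a flavour of what a lumped reluctance model computes, the Python sketch below evaluates the inductance of a generic gapped core with a first-order fringing correction. It is not the CCTT geometry or the thesis's dedicated fringing equations, and the dimensions are illustrative only.

```python
import math

MU0 = 4 * math.pi * 1e-7        # permeability of free space (H/m)

def gapped_core_inductance(n_turns, l_core, mu_r, a, b, gap):
    """Inductance from a lumped reluctance model of a gapped core.

    Generic sketch only: the CCTT reluctance model uses dedicated
    fringing equations for its pole and winding sections, whereas here
    the fringing flux is approximated by the classic first-order trick
    of enlarging the gap cross-section (a x b) by the gap length."""
    area_core = a * b
    area_gap = (a + gap) * (b + gap)             # fringing-enlarged gap area
    r_core = l_core / (MU0 * mu_r * area_core)   # core reluctance (A-t/Wb)
    r_gap = gap / (MU0 * area_gap)               # airgap reluctance
    return n_turns**2 / (r_core + r_gap)

# Illustrative numbers, not the 72 kW prototype's dimensions:
L = gapped_core_inductance(n_turns=8, l_core=0.30, mu_r=1800,
                           a=0.04, b=0.05, gap=2e-3)
print(f"L ~ {L * 1e6:.0f} uH")
```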
Abstract:
With the rapid growth of the Internet and digital communications, the volume of sensitive electronic transactions being transferred and stored over and on insecure media has increased dramatically in recent years. The growing demand for cryptographic systems to secure this data, across a multitude of platforms ranging from large servers to small mobile devices and smart cards, has necessitated research into low-cost, flexible and secure solutions. As constraints on architectures such as area, speed and power become key factors in choosing a cryptosystem, methods for speeding up the development and evaluation process are necessary. This thesis investigates flexible hardware architectures for the main components of a cryptographic system. Dedicated hardware accelerators can provide significant performance improvements when compared to implementations on general-purpose processors. Each of the designs proposed is analysed in terms of speed, area, power, energy and efficiency. Field Programmable Gate Arrays (FPGAs) are chosen as the development platform due to their fast development time and reconfigurable nature. Firstly, a reconfigurable architecture for performing elliptic curve point scalar multiplication on an FPGA is presented. Elliptic curve cryptography is one such method of securing data, offering similar security levels to traditional systems, such as RSA, but with smaller key sizes, translating into lower memory and bandwidth requirements. The architecture is implemented using different underlying algorithms and coordinate systems, covering dedicated Double-and-Add algorithms, twisted Edwards algorithms and SPA-secure algorithms, and its power consumption and energy on an FPGA are measured. Hardware implementation results for these new algorithms are compared against their software counterparts, and the best choices for minimum area-time and area-energy circuits are then identified and examined for larger key and field sizes. Secondly, implementation methods are presented for another component of a cryptographic system, namely the hash functions developed in the recently concluded SHA-3 competition. Various designs from the three rounds of the NIST-run competition are implemented on FPGA, along with an interface to allow fair comparison of the different hash functions when operating in a standardised and constrained environment. Different methods of implementation for the designs, and their subsequent performance in terms of throughput, area and energy costs, are examined using various constraint metrics. Comparing many different implementation methods and algorithms is nontrivial. Another aim of this thesis is therefore the development of generic interfaces, used both to reduce implementation and test time and to enable fair baseline comparisons of different algorithms operating in a standardised and constrained environment. Finally, a hardware-software co-design cryptographic architecture is presented. This architecture is capable of supporting multiple types of cryptographic algorithms and is described through an application for performing public key cryptography, namely the Elliptic Curve Digital Signature Algorithm (ECDSA). This architecture makes use of the elliptic curve architecture and the hash functions described previously. These components, along with a random number generator, provide hardware acceleration for a MicroBlaze-based cryptographic system. 
The trade-off between performance and flexibility is discussed using dedicated software and hardware-software co-design implementations of the elliptic curve point scalar multiplication block. Results are then presented in terms of the overall cryptographic system.
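As a software point of reference for the scalar-multiplication kernel, here is a minimal left-to-right double-and-add in Python on a toy textbook curve (not one of the thesis's curves or coordinate systems). Its key-dependent branching is precisely the side-channel leak that SPA-secure algorithms are designed to remove.

```python
def ec_add(P, Q, a, p):
    """Add affine points on y^2 = x^3 + a*x + b over GF(p); None = infinity."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                         # P + (-P)
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p    # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p           # chord slope
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def scalar_mult(k, P, a, p):
    """Left-to-right double-and-add; the key-dependent 'add' branch is
    the SPA leak that SPA-secure algorithms avoid."""
    R = None
    for bit in bin(k)[2:]:
        R = ec_add(R, R, a, p)             # double every iteration
        if bit == "1":
            R = ec_add(R, P, a, p)         # add only on set key bits
    return R

# Toy textbook curve y^2 = x^3 + 2x + 2 over GF(17), base point (5, 1)
print(scalar_mult(7, (5, 1), a=2, p=17))   # -> (0, 6)
```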
Abstract:
This thesis critically investigates the divergent international approaches to the legal regulation of the patentability of computer software inventions, with a view to identifying the reforms necessary for a certain, predictable and uniform inter-jurisdictional system of protection. Through a critical analysis of the traditional and contemporary US and European regulatory frameworks of protection for computer software inventions, this thesis demonstrates the confusion and legal uncertainty resulting from ill-defined patent laws and inconsistent patent practices as to the scope of the “patentable subject matter” requirement, further compounded by substantial flaws in the structural configuration of the decision-making procedures within which the patent systems operate. This damaging combination prevents the operation of an accessible and effective Intellectual Property (IP) legal framework of protection for computer software inventions, capable of securing adequate economic returns for inventors whilst preserving the necessary scope for innovation and competition in the field, to the ultimate benefit of society. In exploring the substantive and structural deficiencies in the European and US regulatory frameworks, this thesis ultimately demonstrates that the best approach to reforming the legal regulation of software patentability is two-tiered. It shows that any reform to achieve international legal harmony first requires the legislature to clarify (in Europe) or restate (in the US) the long-standing, inadequate rules governing the scope of software “patentable subject matter”, together with the reorganisation of the unworkable structural configuration of the decision-making procedures. Informed by the critical analysis of the evolution of the “patentable subject matter” requirement for computer software in the US, this thesis gives particular consideration to the potential of the reforms of the European patent system currently underway to bring certainty, predictability and uniformity to the legal treatment of computer software inventions.
Abstract:
The present study aimed to investigate interactions of components in high-solids systems during storage. The systems included (i) lactose–maltodextrin (MD) with various dextrose equivalents (DE) at different mixing ratios, (ii) whey protein isolate (WPI)–oil [olive oil (OO) or sunflower oil (SO)] at a 75:25 ratio, and (iii) WPI–oil–{glucose (G)–fructose (F) 1:1 syrup [70% (w/w) total solids]} at a component ratio of 45:15:40. Crystallization of lactose was delayed and increasingly inhibited with increasing MD contents and higher DE values (small molecular size or low molecular weight), although all systems showed similar glass transition temperatures at each water activity (aw). The water sorption isotherms of non-crystalline lactose and lactose–MD (0.11 to 0.76 aw) could be derived from the sum of the sorbed water contents of the individual amorphous components. The GAB equation was fitted to the data of all non-crystalline systems. The protein–oil and protein–oil–sugar materials showed maximum protein oxidation and disulfide bonding at 2 weeks of storage at 20 and 40°C. The WPI–OO system showed denaturation and pre-aggregation of proteins during storage at both temperatures. The presence of G–F in WPI–oil increased the Tonset and Tpeak of protein aggregation, and the oxidative damage of the protein during storage, especially in systems with a higher level of unsaturated fatty acids. Lipid oxidation and glycation products in the sugar-containing systems promoted oxidation of proteins, increased changes in protein conformation and aggregation of proteins, and resulted in insolubility of solids or increased hydrophobicity, concomitantly with hardening of structure, covalent crosslinking of proteins, and formation of stable polymerized solids, especially after storage at 40°C. Using dynamic mechanical analysis, we found protein hydration transitions preceding denaturation transitions in all high-protein systems, as well as the glass transition of confined water in the protein systems.
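The GAB (Guggenheim-Anderson-de Boer) isotherm mentioned above has the closed form w(aw) = wm·C·K·aw / [(1 − K·aw)(1 − K·aw + C·K·aw)]. The Python sketch below fits it over the study's 0.11-0.76 aw range; the sorption data here are invented for illustration, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def gab(aw, wm, C, K):
    """GAB isotherm: water content vs. water activity.
    wm = monolayer water content; C, K = energy constants."""
    return wm * C * K * aw / ((1 - K * aw) * (1 - K * aw + C * K * aw))

# Water activities span the study's 0.11-0.76 range; the water contents
# (g/100 g solids) are invented for the sketch, not measured data.
aw = np.array([0.11, 0.23, 0.33, 0.44, 0.54, 0.65, 0.76])
w = np.array([2.1, 3.4, 4.3, 5.4, 6.8, 8.9, 12.5])

(wm, C, K), _ = curve_fit(gab, aw, w, p0=[4.0, 10.0, 0.9],
                          bounds=([0.1, 1.0, 0.1], [20.0, 500.0, 1.0]))
print(f"wm = {wm:.2f} g/100 g, C = {C:.1f}, K = {K:.3f}")
```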
Abstract:
Cloud services provide their users with flexible resource provisioning, but in the current market a user has to choose from a limited set of configurations at a fixed price. This paper presents an autonomous negotiation system, termed CloudNeg, for negotiating cloud services. CloudNeg provides buyers and sellers of cloud services with autonomous agents that negotiate the specifications of a cloud instance, including price, on their behalf. These agents elicit their buyers’ time preferences and use them in negotiations. Further, this paper presents two artifacts: a negotiation algorithm and a prototype, which together form CloudNeg.
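The abstract does not spell out the negotiation algorithm, but a common way to fold elicited time preferences into offers is a time-dependent concession tactic in the style of Faratin et al.; the Python sketch below is such a generic tactic, offered purely as an illustration of the idea, not as CloudNeg's actual algorithm.

```python
def buyer_offer(t: float, deadline: float, p_min: float, p_max: float,
                e: float, k: float = 0.05) -> float:
    """Time-dependent concession tactic (in the style of Faratin et al.).

    e < 1: patient 'Boulware' buyer, concedes only near the deadline;
    e > 1: impatient 'Conceder' buyer, concedes early.  k sets the
    opening concession.  Generic illustration; the abstract does not
    disclose CloudNeg's actual algorithm."""
    alpha = k + (1 - k) * (t / deadline) ** (1 / e)
    return p_min + alpha * (p_max - p_min)

# Two buyers negotiating a price in [0.10, 0.50] $/hour over 10 rounds
for t in (0, 5, 9):
    print(t, round(buyer_offer(t, 10, 0.10, 0.50, e=0.3), 3),  # Boulware
             round(buyer_offer(t, 10, 0.10, 0.50, e=3.0), 3))  # Conceder
```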
Abstract:
A growing number of software development projects successfully exhibit a mix of agile and traditional software development methodologies. Many of these mixed methodologies are organization-specific and tailored to a specific project. Our objective in this research-in-progress paper is to develop an artifact that can guide the development of such a mixed methodology. Using control theory, we design a process model that provides theoretical guidance for building a portfolio of controls that can support the development of a mixed methodology for software development. Controls, embedded in methods, provide a generalizable and adaptable framework for project managers to develop a mixed methodology specific to the demands of their project. A research methodology is proposed to test the model. Finally, future directions and contributions are discussed.
Abstract:
In a landmark book published in 2000, the sociologist Danièle Hervieu-Léger defined religion as a chain of memory, by which she meant that within religious communities remembered traditions are transmitted with an overpowering authority from generation to generation. After analysing Hervieu-Léger’s sociological approach as overcoming the dichotomy between substantive and functional definitions, this article compares a ritual honouring the ancestors in which a medium becomes possessed by the senior elder’s ancestor spirit among the Shona of Zimbabwe with a cleansing ritual performed by a Celtic shaman in New Hampshire, USA. In both instances, despite different social and historical contexts, appeals are made to an authoritative tradition to legitimize the rituals performed. This lends support to the claim that the authoritative transmission of a remembered tradition, by exercising an overwhelming power over communities, even if the memory of such a tradition is merely postulated, identifies the necessary and essential component for any activity to be labelled “religious”.
Abstract:
The mobile cloud computing paradigm can offer relevant and useful services to the users of smart mobile devices. Such public services already exist on the web and in cloud deployments that implement common web service standards. However, these services are described by mark-up languages, such as XML, that cannot be comprehended by non-specialists. Furthermore, the lack of common interfaces for related services makes discovery and consumption difficult for both users and software. The problem of service description, discovery and consumption for the mobile cloud must be addressed to allow users to benefit from these services on mobile devices. This paper introduces our work on a mobile cloud service discovery solution, which is utilised by our mobile cloud middleware, Context Aware Mobile Cloud Services (CAMCS). The aim of our approach is to remove complex mark-up languages from the description and discovery process. By means of the Cloud Personal Assistant (CPA) assigned to each user of CAMCS, relevant mobile cloud services can be discovered and consumed easily by the end user from the mobile device. We present the discovery process, the architecture of our own service registry, and the service description structure. CAMCS allows services to be used from the mobile device through a user's CPA, by means of user-defined tasks. We present the task model of the CPA enabled by our solution, including automatic tasks, which can perform work for the user without an explicit request.
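To make the mark-up-free description idea concrete, here is a hypothetical minimal registry entry and category lookup in Python; the field names and the discover function are our illustration, not the CAMCS registry's actual schema or API.

```python
from dataclasses import dataclass, field

@dataclass
class ServiceDescription:
    """Minimal mark-up-free registry entry in the spirit of CAMCS.
    Field names are hypothetical; the abstract does not enumerate the
    actual service description structure."""
    name: str
    category: str                                  # matched against user tasks
    endpoint: str                                  # where the service is invoked
    inputs: dict = field(default_factory=dict)     # parameter name -> type

REGISTRY: list = []                                # stand-in for the registry

def discover(category: str) -> list:
    """Category lookup a CPA could run on the user's behalf."""
    return [s for s in REGISTRY if s.category == category]

REGISTRY.append(ServiceDescription(
    name="weather-forecast", category="weather",
    endpoint="https://example.org/forecast", inputs={"city": "str"}))
print(discover("weather")[0].name)                 # -> weather-forecast
```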
Abstract:
It is estimated that the quantity of digital data being transferred, processed or stored at any one time currently stands at 4.4 zettabytes (4.4 × 2^70 bytes), and this figure is expected to have grown by a factor of 10, to 44 zettabytes, by 2020. Exploiting this data is, and will remain, a significant challenge. At present there is the capacity to store 33% of the digital data in existence at any one time; by 2020 this capacity is expected to fall to 15%. These statistics suggest that, in the era of Big Data, the identification of important, exploitable data will need to be done in a timely manner. Systems for the monitoring and analysis of data, e.g. stock markets, smart grids and sensor networks, can be made up of massive numbers of individual components. These components can be geographically distributed yet may interact with one another via continuous data streams, which in turn may affect the state of the sender or receiver. This introduces a dynamic causality, which further complicates the overall system by introducing a temporal constraint that is difficult to accommodate. Practical approaches to realising such systems have led to a multiplicity of analysis techniques, each of which concentrates on specific characteristics of the system being analysed and treats those characteristics as the dominant component affecting the results being sought. This multiplicity of analysis techniques introduces another layer of heterogeneity, namely heterogeneity of approach, partitioning the field to the extent that results from one domain are difficult to exploit in another. The question is therefore asked: can a generic solution for the monitoring and analysis of data be identified that accommodates temporal constraints, bridges the gap between expert knowledge and raw data, and enables data to be effectively interpreted and exploited in a transparent manner? The approach proposed in this dissertation acquires, analyses and processes data in a manner that is free of the constraints of any particular analysis technique, while at the same time facilitating these techniques where appropriate. Constraints are applied by defining a workflow based on the production, interpretation and consumption of data. This supports the application of different analysis techniques to the same raw data without the danger of incorporating hidden bias. To illustrate and realise this approach, a software platform has been created that allows for the transparent analysis of data, combining analysis techniques with a maintainable record of provenance, so that independent third-party analysis can be applied to verify any derived conclusions. To demonstrate these concepts, a complex real-world example involving the near real-time capture and analysis of neurophysiological data from a neonatal intensive care unit (NICU) was chosen. A system was engineered to gather raw data, analyse that data using different analysis techniques, uncover information, incorporate that information into the system, and curate the evolution of the discovered knowledge. The application domain was chosen for three reasons: firstly, because it is complex and no comprehensive solution exists; secondly, because it requires tight interaction with domain experts, and thus the handling of subjective knowledge and inference; and thirdly, because, given the dearth of neurophysiologists, there is a real-world need to provide a solution for this domain.
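As a sketch of how derived results can carry a verifiable provenance trail, the Python below wraps each analysis output with a hash of its inputs and the technique applied; this is our illustration of the workflow idea, not the platform's actual record format.

```python
import hashlib
import json
import time

def provenance_record(inputs: list, technique: str, result) -> dict:
    """Wrap a derived result with a record of how it was produced.

    A minimal sketch of the production/interpretation/consumption
    workflow idea; the platform's actual record format is not given in
    the abstract.  Hashing the input names and technique lets a third
    party confirm that a re-run used the same raw data and analysis."""
    digest = hashlib.sha256(
        json.dumps({"inputs": inputs, "technique": technique},
                   sort_keys=True).encode()).hexdigest()
    return {"inputs": inputs, "technique": technique,
            "result": result, "timestamp": time.time(), "digest": digest}

# Hypothetical NICU-style usage: a detector score derived from one channel
rec = provenance_record(["eeg_channel_3.raw"], "seizure-detector-v1", 0.87)
print(rec["digest"][:16], rec["result"])
```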