3 results for software markets
in CORA - Cork Open Research Archive - University College Cork - Ireland
Abstract:
This thesis critically investigates the divergent international approaches to the legal regulation of the patentability of computer software inventions, with a view to identifying the reforms necessary for a certain, predictable and uniform inter-jurisdictional system of protection. Through a critical analysis of the traditional and contemporary US and European regulatory frameworks of protection for computer software inventions, this thesis demonstrates the confusion and legal uncertainty resulting from ill-defined patent laws and inconsistent patent practices as to the scope of the “patentable subject matter” requirement, further compounded by substantial flaws in the structural configuration of the decision-making procedures within which the patent systems operate. This damaging combination prevents the operation of an accessible and effective Intellectual Property (IP) legal framework of protection for computer software inventions, one capable of securing adequate economic returns for inventors whilst preserving the necessary scope for innovation and competition in the field, to the ultimate benefit of society. In exploring the substantive and structural deficiencies in the European and US regulatory frameworks, this thesis ultimately argues that the best approach to reforming the legal regulation of software patentability is two-tiered. It demonstrates that any reform aimed at international legal harmony first requires the legislature to clarify (in Europe) or restate (in the US) the long-standing, inadequate rules governing the scope of software “patentable subject matter”, together with the reorganisation of the unworkable structural configuration of the decision-making procedures. Informed by the critical analysis of the evolution of the “patentable subject matter” requirement for computer software in the US, this thesis considers, in particular, the potential of the reforms of the European patent system currently underway to bring about certainty, predictability and uniformity in the legal treatment of computer software inventions.
Abstract:
A growing number of software development projects successfully exhibit a mix of agile and traditional software development methodologies. Many of these mixed methodologies are organization-specific and tailored to a specific project. Our objective in this research-in-progress paper is to develop an artifact that can guide the development of such a mixed methodology. Using control theory, we design a process model that provides theoretical guidance for building a portfolio of controls that can support the development of a mixed methodology for software development. Controls, embedded in methods, provide a generalizable and adaptable framework for project managers to develop a mixed methodology tailored to the demands of their project. A research methodology is proposed to test the model. Finally, future directions and contributions are discussed.
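The “portfolio of controls” idea lends itself to a simple data-structure illustration. The Python sketch below is purely illustrative and not the authors’ artifact: the names (Control, ControlPortfolio, the control modes) and the example controls are assumptions, intended only to show how formal and informal controls drawn from agile and plan-driven methods might be combined for a single project.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class ControlMode(Enum):
    """Control modes commonly discussed in control-theoretic views of IS projects."""
    OUTCOME = "outcome"      # formal: measure deliverables against targets
    BEHAVIOUR = "behaviour"  # formal: prescribe how the work is carried out
    CLAN = "clan"            # informal: shared norms and rituals (e.g. stand-ups)
    SELF = "self"            # informal: individual self-management


@dataclass
class Control:
    name: str
    mode: ControlMode
    source_method: str  # e.g. "Scrum" (agile) or "Waterfall" (traditional)
    rationale: str


@dataclass
class ControlPortfolio:
    """A project-specific mix of controls drawn from agile and traditional methods."""
    project: str
    controls: List[Control] = field(default_factory=list)

    def add(self, control: Control) -> None:
        self.controls.append(control)

    def by_mode(self, mode: ControlMode) -> List[Control]:
        return [c for c in self.controls if c.mode is mode]


# Hypothetical usage: a mixed methodology combining agile and plan-driven controls.
portfolio = ControlPortfolio(project="Payments replatforming")
portfolio.add(Control("Sprint review", ControlMode.OUTCOME, "Scrum",
                      "Inspect increments against the sprint goal"))
portfolio.add(Control("Stage-gate sign-off", ControlMode.BEHAVIOUR, "Waterfall",
                      "Formal approval before the next phase starts"))
portfolio.add(Control("Daily stand-up", ControlMode.CLAN, "Scrum",
                      "Reinforce shared norms and surface blockers"))
print([c.name for c in portfolio.by_mode(ControlMode.OUTCOME)])
```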
Abstract:
It is estimated that the quantity of digital data being transferred, processed or stored at any one time currently stands at 4.4 zettabytes (4.4 × 2^70 bytes), and this figure is expected to grow by a factor of 10, to 44 zettabytes, by 2020. Exploiting this data is, and will remain, a significant challenge. At present there is the capacity to store 33% of the digital data in existence at any one time; by 2020 this capacity is expected to fall to 15%. These statistics suggest that, in the era of Big Data, the identification of important, exploitable data will need to be done in a timely manner. Systems for the monitoring and analysis of data, e.g. stock markets, smart grids and sensor networks, can be made up of massive numbers of individual components. These components can be geographically distributed yet may interact with one another via continuous data streams, which in turn may affect the state of the sender or receiver. This introduces a dynamic causality, which further complicates the overall system by introducing a temporal constraint that is difficult to accommodate. Practical approaches to realising such systems have led to a multiplicity of analysis techniques, each of which concentrates on specific characteristics of the system being analysed and treats these characteristics as the dominant component affecting the results being sought. This multiplicity of analysis techniques introduces another layer of heterogeneity, that is, heterogeneity of approach, partitioning the field to the extent that results from one domain are difficult to exploit in another. The question asked is whether a generic solution for the monitoring and analysis of data can be identified that: accommodates temporal constraints; bridges the gap between expert knowledge and raw data; and enables data to be effectively interpreted and exploited in a transparent manner. The approach proposed in this dissertation acquires, analyses and processes data in a manner that is free of the constraints of any particular analysis technique, while at the same time facilitating these techniques where appropriate. Constraints are applied by defining a workflow based on the production, interpretation and consumption of data. This supports the application of different analysis techniques to the same raw data without the danger of incorporating any hidden bias that may exist. To illustrate and realise this approach, a software platform has been created that allows for the transparent analysis of data, combining analysis techniques with a maintainable record of provenance so that independent third-party analysis can be applied to verify any derived conclusions. To demonstrate these concepts, a complex real-world example involving the near real-time capture and analysis of neurophysiological data from a neonatal intensive care unit (NICU) was chosen. A system was engineered to gather raw data, analyse that data using different analysis techniques, uncover information, incorporate that information into the system and curate the evolution of the discovered knowledge. The application domain was chosen for three reasons: firstly, because it is complex and no comprehensive solution exists; secondly, because it requires tight interaction with domain experts, thus requiring the handling of subjective knowledge and inference; and thirdly, because, given the dearth of neurophysiologists, there is a real-world need to provide a solution for this domain.
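As a purely illustrative sketch, and not the platform described in the thesis, the following Python fragment shows one way a production/interpretation/consumption workflow with a provenance record might be expressed, so that different analysis techniques applied to the same raw data remain traceable and auditable by a third party. All names (ProvenanceRecord, Workflow, digest, the example techniques) are hypothetical assumptions.

```python
import hashlib
import json
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any, Callable, Dict, List


@dataclass
class ProvenanceRecord:
    """Append-only log linking each derived result back to its inputs."""
    entries: List[Dict[str, Any]] = field(default_factory=list)

    def log(self, stage: str, technique: str, input_digest: str, output_digest: str) -> None:
        self.entries.append({
            "stage": stage,
            "technique": technique,
            "input": input_digest,
            "output": output_digest,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        })


def digest(obj: Any) -> str:
    """Stable fingerprint of a data payload, so raw data is never silently altered."""
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()


@dataclass
class Workflow:
    """Produce -> interpret -> consume, with every step recorded for later audit."""
    provenance: ProvenanceRecord = field(default_factory=ProvenanceRecord)

    def interpret(self, raw: Any, technique: str, analyse: Callable[[Any], Any]) -> Any:
        result = analyse(raw)  # the analysis technique is pluggable, not baked in
        self.provenance.log("interpretation", technique, digest(raw), digest(result))
        return result


# Hypothetical usage: two independent techniques applied to the same raw stream.
raw_stream = [0.1, 0.4, 0.35, 0.9]
wf = Workflow()
mean_value = wf.interpret(raw_stream, "moving-average", lambda xs: sum(xs) / len(xs))
peak_value = wf.interpret(raw_stream, "peak-detection", lambda xs: max(xs))
print(json.dumps(wf.provenance.entries, indent=2))
```

Because every interpretation step records digests of both its input and its output, two analyses of the same raw stream can later be compared and independently verified without either one constraining the other.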