4 results for returns-to-scale

at Universidade do Minho


Relevance:

80.00%

Publisher:

Abstract:

Well-dispersed loads of finely powdered metals, metal oxides, several carbon allotropes or nanoclays are incorporated into highly porous polyamide 6 microcapsules in controllable amounts via an original one-step in situ fabrication technique. It is based on activated anionic polymerization (AAP) of ε-caprolactam in a hydrocarbon solvent, performed in the presence of the respective micro- or nanosized loads. The forming microcapsules, with typical diameters of 25-50 µm, entrap up to 40 wt% of load. Their melt processing produces hybrid thermoplastic composites. Mechanical, electrical conductivity and magnetic response measurements show that transforming the in situ loaded microcapsules into composites by melt processing (MP) is a facile and rapid method to fabricate materials with high mechanical resistance and electro-magnetic characteristics sufficient for many industrial applications. This novel concept requires low polymerization temperatures, needs no functionalization or compatibilization of the loads, and is easy to scale up to industrial production levels.

Relevance:

80.00%

Publisher:

Abstract:

Doctoral thesis in Juridical Sciences (specialization in Public Juridical Sciences).

Relevance:

80.00%

Publisher:

Abstract:

Doctoral thesis in Business Sciences.

Relevance:

80.00%

Publisher:

Abstract:

Large scale distributed data stores rely on optimistic replication to scale and remain highly available in the face of network partitions. Managing data without coordination results in eventually consistent data stores that allow for concurrent data updates. These systems often use anti-entropy mechanisms (like Merkle Trees) to detect and repair divergent data versions across nodes. However, in practice hash-based data structures are too expensive for large amounts of data and create too many false conflicts. Another aspect of eventual consistency is detecting write conflicts. Logical clocks are often used to track data causality, necessary to detect causally concurrent writes on the same key. However, there is a non-negligible metadata overhead per key, which also keeps growing with time, proportional to the node churn rate. Another challenge is deleting keys while respecting causality: while the values can be deleted, per-key metadata cannot be permanently removed without coordination. We introduce a new causality management framework for eventually consistent data stores that leverages node logical clocks (Bitmapped Version Vectors) and a new key logical clock (Dotted Causal Container) to provide advantages on multiple fronts: 1) a new efficient and lightweight anti-entropy mechanism; 2) greatly reduced per-key causality metadata size; 3) accurate key deletes without permanent metadata.
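To illustrate the causality tracking the abstract describes, here is a minimal, hypothetical sketch (not the framework from the thesis) of how a plain version vector classifies two writes on the same key as causally ordered or concurrent; the node names and the `compare` helper are illustrative assumptions only:

```python
# Hypothetical sketch: a version vector maps node ids to event counters.
# Comparing two vectors tells whether one write causally follows the other
# or whether they are concurrent (a conflict that must be resolved).

def dominates(a, b):
    """True if vector `a` has seen at least every event recorded in `b`."""
    return all(a.get(node, 0) >= count for node, count in b.items())

def compare(a, b):
    """Classify the causal relation between two version vectors."""
    if dominates(a, b) and dominates(b, a):
        return "equal"
    if dominates(a, b):
        return "a-after-b"
    if dominates(b, a):
        return "b-after-a"
    return "concurrent"  # causally unrelated writes on the same key

# Two replicas updated the same key independently after a partition:
v1 = {"nodeA": 2, "nodeB": 1}
v2 = {"nodeA": 1, "nodeB": 2}
print(compare(v1, v2))  # → concurrent
```

Note how the per-key metadata here (one counter per writing node) grows with node churn, which is exactly the overhead the Bitmapped Version Vector and Dotted Causal Container approach aims to reduce.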