Posts categorized “architecture”
The vast majority of all traffic on the Internet is encrypted. It took almost 20 years to reach about 40 percent encrypted traffic, and then encryption grew by more than another 40 percentage points in just the last four years (source: Google Transparency Report).
With the proliferation of mobile and web applications, latency has become a huge deal on today's Internet. The Internet is a very competitive environment, and users are increasingly content hungry and impatient with latency.
Intel® recently released the second generation of its Xeon® Scalable processor family, code-named Cascade Lake.
Oracle® GoldenGate® supports two architectures: the classic architecture and the Oracle GoldenGate Microservices Architecture (OGG MA).
The classic architecture uses standard GoldenGate processes and is managed by the GoldenGate Software Command Interface (GGSCI).
OGG MA is a RESTful application programming interface (API), microservices-based architecture that enables you to install, configure, monitor, and manage Oracle GoldenGate services through a web-based user interface. OGG MA was introduced in GoldenGate version 12.3 and was designed from the perspective of cloud operations.
Thirteen tips for taking tech tests
Yes I can speak Cloud...I learned it from a book.
A proxy server is a computer system that sits between the client that requests a web document and the target server (another computer system) that serves the document. In its simplest form, a proxy server facilitates communication between the client and the target server without modifying requests or replies.
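As an illustration of that "simplest form" (this sketch is not from the original post, and the `relay` and `proxy` names are hypothetical), a pass-through proxy can be reduced to copying bytes in both directions without touching them:

```python
import socket
import threading

def relay(src, dst, bufsize=4096):
    # Copy bytes from src to dst until src signals end-of-stream.
    while True:
        data = src.recv(bufsize)
        if not data:
            break
        dst.sendall(data)
    try:
        dst.shutdown(socket.SHUT_WR)  # propagate end-of-stream downstream
    except OSError:
        pass  # peer may already be closed

def proxy(client, server):
    # Relay in both directions without modifying requests or replies --
    # the defining behavior of a proxy in its simplest form.
    t = threading.Thread(target=relay, args=(server, client))
    t.start()
    relay(client, server)
    t.join()
```

A real proxy would also resolve the target server, handle many clients concurrently, and often inspect or cache what passes through; this sketch shows only the forwarding core.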
Businesses compete to transform digitally, but most are restricted in some way from moving to the cloud or to a new data center by existing applications or infrastructure. Docker® comes to the rescue and enables the independence of applications and infrastructure. It is the only container platform that addresses every application across the hybrid cloud.
This blog explains why you might want to use Docker and provides insights into the Docker architecture and key features so that you can get started with these migration activities.
Originally published by TriCore: June 6, 2017
Oracle® Data Pump (expdp, impdp) is a utility for exporting and importing database objects in and across databases. Part 1 of this two-part blog post series discussed the introduction of multitenant architecture in Oracle Database 12c and how to use Data Pump to export and import data. Part 2 covers how to take an export of only pluggable databases (PDBs) and the restrictions that Data Pump places on PDBs.
Originally published by TriCore: June 6, 2017
Oracle® Data Pump (expdp, impdp) is a utility for exporting and importing database objects in and across databases. While most database administrators are aware of Data Pump, support for multitenant architecture in Oracle Database 12c introduced changes to how Data Pump exports and imports data.
The Threat and Vulnerability Analysis team at Rackspace is charged with providing internal vulnerability scanning, penetration testing, and red/purple teaming capabilities to reduce cyber-based threats, risk, and exposure for the company. One of our tasks, as part of meeting certain compliance objectives, is to ensure systems are not exposed from various networking "perspectives" without going through a bastion first.
This blog post explores the basics of Oracle® GoldenGate® and its functions. Because it's decoupled from the database architecture, GoldenGate facilitates real-time capture and integration of transactional change data across both heterogeneous and homogeneous environments.
This post describes the Oracle® In-Memory Advisor (IMA), a feature of Database 12c, and its benefits. This feature is available in Oracle Database version 12.1.0.2 and later.
Originally published by Tricore: July 11, 2017
In Part 1 of this two-part series on Apache™ Hadoop®, we introduced the Hadoop ecosystem and the Hadoop framework. In Part 2, we cover more core components of the Hadoop framework, including those for querying, external integration, data exchange, coordination, and management. We also introduce a module that monitors Hadoop clusters.
Originally published by Tricore: July 10, 2017
Apache™ Hadoop® is an open source, Java-based framework that's designed to process huge amounts of data in a distributed computing environment. Doug Cutting and Mike Cafarella developed Hadoop, which was released in 2005.
Built on commodity hardware, Hadoop works on the basic assumption that hardware failures are common. The Hadoop framework addresses these failures.
In Part 1 of this two-part blog series, we'll cover big data, the Hadoop ecosystem, and some key components of the Hadoop framework.
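To make the failure assumption concrete, here is a toy Python sketch of HDFS-style block replication (this is not Hadoop code; the function and variable names are hypothetical). Each block is stored on several nodes, so the data survives when one node fails:

```python
REPLICATION_FACTOR = 3  # HDFS's default replication factor

def place_block(block_id, nodes, placements, factor=REPLICATION_FACTOR):
    # Record which nodes hold a copy of this block.
    placements[block_id] = list(nodes[:factor])

def read_block(block_id, live_nodes, placements):
    # Any surviving replica can serve the read after a node failure.
    for node in placements[block_id]:
        if node in live_nodes:
            return node
    raise IOError("all replicas lost")
```

Real HDFS placement is rack-aware and re-replicates blocks when nodes die, but the principle is the same: expect hardware to fail and keep enough copies that failures are routine rather than fatal.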
This blog gives an overview of Apache Cassandra™, a non-relational database. It discusses Cassandra's components and provides an understanding of how the database operates and manages data.
Parallel Replicat is one of the new features introduced in Oracle® GoldenGate 12c Release 3 (12.3.0.1). Parallel Replicat is designed to help users quickly load data into their environments by using multiple parallel mappers and threads.
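Conceptually, the speedup comes from applying independent transactions with a pool of parallel workers. The Python sketch below is an illustration of that idea only, not GoldenGate's implementation, and it assumes the transactions are independent; tracking dependencies between transactions is precisely what Parallel Replicat adds on top:

```python
from concurrent.futures import ThreadPoolExecutor

def apply_in_parallel(transactions, apply_fn, workers=4):
    # Hand each transaction to one of `workers` parallel appliers.
    # pool.map preserves input order in its results.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(apply_fn, transactions))
```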
This blog discusses the Oracle Exadata Smart Flash Cache feature and its architecture, including the write-back flash cache feature.
Where do you conduct your User Acceptance Testing (UAT) activities? It's a loaded question that many organizations have trouble addressing, because they first need a clear definition of what UAT is (and what it isn't) before they can even consider where UAT activities should occur. The benefits of a properly instituted UAT environment far outweigh the challenges of building one, and the danger of not having one is real, but success requires a thoughtful and purposeful approach.
Sitecore implementations with Content Delivery nodes in multiple locations must keep their databases and content in sync. The Sitecore Scaling Guide summarizes areas of concern, such as isolating CM and CD servers, enabling the Sitecore scalability settings, and maintaining search indexes. Sitecore runs on top of SQL Server, and one topic the Scaling Guide touches on is SQL Server replication; conveniently, there is a dedicated Sitecore guide for that specific subject. That guide explains how, with SQL Server merge replication, you can coordinate the content of Sitecore databases that are not in the same location. This is the starting point for what we at Rackspace have found to be a global publishing architecture that meets the needs of enterprise Sitecore customers.
Before getting into the nuts and bolts of the load balancing architecture itself, it's important to understand the (typical) multiple tiers of an E-Commerce application framework:
- Firewall (edge)
- Physical local traffic manager (LTM)
- Web Server
- Application Server
- Database Server (cluster)
Keep in mind that, from top to bottom, the environment will be asymmetrical from a load perspective. For example, a single web server will typically handle two to three times as many concurrent connections as a single application server, depending heavily on cache density: a higher density shifts more load up into the web tier. Caching is a subject for a later discussion, but at a glance, it should account for 80 percent or more of content served. With room for variance, the majority of successful architectures achieve this metric, while those that struggle tend to miss it. This is not to say, of course, that a lower density will necessarily cause difficulties. In addition to moving load away from the application servers, a higher cache density opens an opportunity for external services, such as the Akamai CDN, to absorb load before it ever reaches the environment.
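A back-of-envelope calculation makes the asymmetry concrete. The per-server capacity below is a hypothetical number chosen for illustration, not a figure from this post; only the 80 percent cache-density target comes from the discussion above:

```python
import math

def app_servers_needed(peak_connections, cache_hit_rate=0.80,
                       per_app_server=500):
    # Only cache misses travel past the web tier to the application
    # servers, so a higher cache density shrinks the app tier you
    # must provision. per_app_server is an assumed capacity.
    misses = peak_connections * (1 - cache_hit_rate)
    return math.ceil(misses / per_app_server)
```

At 10,000 peak connections and an 80 percent cache density, only 2,000 connections reach the application tier (four servers at the assumed capacity); drop the density to 50 percent and the same traffic needs ten.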
What is MongoDB?
MongoDB is, among other things, a document-oriented NoSQL database. This means that it deviates from the traditional, relational model to present a flexible, horizontally scaling model for data management and organization.
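To illustrate the flexible document model (plain Python dicts standing in for BSON documents here; no live MongoDB is involved, and `find` below is a toy stand-in for the real driver's query method, not its API):

```python
# Documents in the same collection need not share a schema --
# this is the flexibility that sets the document model apart
# from fixed relational tables.
products = [
    {"_id": 1, "name": "laptop", "specs": {"ram_gb": 16, "cpu": "i7"}},
    {"_id": 2, "name": "desk", "dimensions_cm": [120, 60, 75]},
]

def find(collection, **criteria):
    # Toy query-by-example matcher on top-level fields.
    return [doc for doc in collection
            if all(doc.get(k) == v for k, v in criteria.items())]
```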
How does MongoDB work with AEM?
MongoDB integrates with Adobe Experience Manager (AEM) by means of the crx3mongo runmode and the JVM options -Doak.mongo.uri and -Doak.mongo.db.
Why would I use MongoDB?
Primarily, MongoDB provides an alternative high-availability (HA) configuration to the older CRX cluster configuration. In reality, the architecture is more similar to a shared catalog on NFS or NetApp than to true clustering. The author and publish instances using MongoDB are not necessarily aware of each other.
If you are an OpenStack contributor, you likely rely on DevStack for most of your work. DevStack is, and has long been, the de facto platform that contributors use for development, testing, and reviews. In this article, I want to introduce you to a project I contribute to, called openstack-ansible. For the last few months, I have been using this project as an alternative to DevStack for OpenStack upstream development, and the experience has been very positive.
I - Introduction
This is the first of a two-part series that demonstrates a pain-free solution a developer could use to transition code from laptop to production. The fictional deployment scenario depicted in this post is one method that can significantly reduce operational overhead on the developer. This series will make use of technologies such as Git, Docker, Elastic Beanstalk, and other standard tools.
Container technology is evolving at a very rapid pace. The webinar in this post describes the current state of container technologies within the OpenStack ecosystem. Topics we cover include:
- How OpenStack vendors and operators are using containers to create efficiencies in deployment of the control plane services
- Approaches OpenStack consumers are taking to deploy container-based applications on OpenStack clouds
Architecting applications for a cloud environment usually means treating each cloud server as ephemeral. If you destroy the cloud server, the data is destroyed with it. But you still need a way to persist data, and cloud block storage has typically been that solution. Attach cloud block storage to a cloud server and save your data within that cloud block device; if the cloud server is destroyed, your data persists and can be re-attached to another cloud server.
The IPython/Jupyter notebook is a wonderful environment for computations, prose, plots, and interactive widgets that you can share with collaborators. People use the notebook all over the place across many varied languages. It gets used by data scientists, researchers, analysts, developers, and people in between.