A tenant-to-tenant migration is a complex and often challenging process. From the initial decision of which tenant will become the new host, every step is critical to success. In the case of a tenant-to-tenant migration, the devil really is in the details.
Considerations around tenant naming convention
In the first part of this series on tenant-to-tenant migration, I focused on the many considerations prior to starting the migration process. One that I didn’t cover is the existing tenants’ naming conventions. Each tenant already has an individual name, so when one tenant moves into the other, its users adopt that tenant’s name.
In some situations, this may not be ideal. For example, a customer might see the tenant name when sharing files, which the company might not want. Or, if the company wants to take a particular marketing approach, they may not want to expose the tenant name and might prefer something more generic.
In cases like these, the two entities might not migrate to one tenant or the other but instead choose to create a third tenant, known as a greenfield tenant. Both companies then migrate their data to this new tenant, essentially starting from scratch.
The greenfield approach
While taking a greenfield approach can be more complicated than migrating one tenant to the other, there are significant benefits to doing so. For example, if a holding company is in the business of acquiring and selling companies, a tenant with a generic name makes it far easier to roll an acquired company into the organization.
One of the most important migration activities is properly transitioning email from two domains to one. Ideally, to migrate email from one tenant to the other, we use purpose-built tools that scan the source tenant, create mail-enabled contacts in the target, and set up calendar coexistence between the two tenants.
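As a rough illustration of the contact-creation step, here is a minimal sketch that reads a hypothetical CSV export of source-tenant mailboxes and generates the Exchange Online New-MailContact commands for the target tenant. The file names and column names are assumptions for this example; dedicated coexistence tooling automates this (and the calendar piece) end to end.

```python
# Minimal sketch: turn a source-tenant mailbox export into mail-contact
# commands. The CSV layout (DisplayName, PrimarySmtpAddress) is hypothetical.
import csv

with open("source_mailboxes.csv", newline="", encoding="utf-8") as src, \
        open("create_contacts.ps1", "w", encoding="utf-8") as out:
    for row in csv.DictReader(src):
        name = row["DisplayName"].replace('"', "'")  # keep quoting intact
        smtp = row["PrimarySmtpAddress"]
        # New-MailContact is an Exchange Online cmdlet; the generated script
        # would be reviewed and run in the target tenant.
        out.write(f'New-MailContact -Name "{name}" -ExternalEmailAddress "{smtp}"\n')
```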
Unfortunately, this step is often complicated by an eager IT person at one of the two organizations. Because email is so critical to a business, it’s one of the first considerations in a tenant-to-tenant migration, so that helpful IT person might proactively start creating new email accounts in the target tenant before the users are properly prepared to be migrated over.
While it’s always done with good intentions, this scenario can become a “gotcha” during the actual migration. Creating new email addresses without consultation complicates the smooth transition of one company’s email to the other’s and often leads to lost email data, either from the existing account or from the newly created one.
Another common complication is discovering third-party integrations during the process. For example, some departments may have implemented third-party Outlook add-ins that support their work; a marketing or finance department may have added tools to help with invoicing or email distribution. The problem with these add-ins, from a tenant-to-tenant migration perspective, is that they can break the Outlook client during the migration if they’re not addressed properly.
Importance of understanding tenant usage during assessment phase
The best way to avoid these kinds of surprises is to have a good handle on who’s using what within the tenant. While this can be a challenge for large organizations, it’s something that we delve into during the assessment phase of the migration process.
A tenant-to-tenant migration is much more than lifting and shifting applications and data from one cloud to another. It requires precision, attention to detail, and covering every possible scenario.
In Part 3 of my series on tenant-to-tenant migration, I covered some of the post-migration considerations as well as how to handle the reverse scenario: a divestiture. You can read Part 3 here.
Click here to learn more about the On-Ramp to Microsoft 365: Tenant to Tenant Migration program.
Before your organization takes the leap of migrating your data warehouse or your data and analytics platform to the cloud, there are some key elements to consider. Having a good handle on your answers to these four questions will better prepare your team to make the move.
1. Why are you migrating your data and analytics platform to the cloud in the first place?
While this might seem like an obvious question, your answer should cover two things: what is the compelling event or reason for your organization to consider migrating your data to the cloud, and what do you want to achieve once it’s there?
In my experience, the answer to that first piece is widely varied. It might be that your organization’s data center hardware is reaching its end of life, end of support, or even end of lease, prompting a decision of some sort to be made. Sometimes it’s that you have reached a point where you have simply run out of hours in the night to do your data processing using your aging infrastructure. You can’t add more grunt to your hardware, and you can’t add hours to the clock, so you’re left with either buying expensive on-premises hardware or moving to the cloud.
Maybe it’s because you want to get your data closer to your apps that already live in the cloud. Or perhaps you want to use machine learning and artificial intelligence to do predictive analytics on your data. To do that, you need to be in the cloud.
Whatever your reasons, know what’s pushing you there and understand your goal. Having a good grasp on both can help you better articulate your vision to any partners you might have in your migration.
2. Are you considering a lift and shift or a full transformation?
Data migration isn’t always a cut-and-dried process. There are legitimate pros and cons to each option: a lift and shift, where you’ll have to retrofit improvements later, or a fundamental redesign of your data, which means starting from the ground up.
For example, a redesign will slow down your process initially, taking longer to get everything up and running. On the other hand, by simply moving what you have up into the cloud, you’re undoubtedly bringing along legacy issues that have been slowing you down. I liken the lift and shift method to packing up an old house and moving it into a new one. Most people find those final few boxes that they simply don’t have the heart to unpack, and that end up sitting in a basement or a garage. They’re probably filled with items that should never have made the move, but now that they have they’re just taking up space.
Whatever you decide, be clear on why you’re making that decision, understand the trade-offs, and go into the process with your eyes open.
3. Where is your IT organization from a skills perspective?
A company migrating its data and analytics platform to the cloud for the first time might make some stark discoveries, including a skills gap within the organization. The skills required to manage on-premises data centers differ from those needed to do data and analytics work in the cloud.
Before you make the move to migrate your data warehouse, take stock of your in-house skill sets. Are there maturity gaps? If so, what’s your plan to bridge any gaps that you might discover? Do you intend to train up? If that’s the case, what time frame are you looking at? Do you need to hire data scientists? If you do, think about what specific skills you require and bring in talent early on in your process, so they have context for your new data environment. Don’t want to reskill or hire? Find a partner that can augment your team with managed services.
Understanding your team’s capabilities and capacity prior to migration will help ease the transition once you start operating in the cloud.
4. What’s your timeline and where do you want to begin?
Every journey starts with a first step, but organizations often have no clue where to begin when it comes to migrating their data center to the cloud.
The best advice is to understand there are experts out there, like Quisitive, who have deep knowledge, great experience, and a proven methodology to help businesses like yours move their data and analytics platforms to the cloud and get where they need to go.
At Quisitive, we’ve developed a proven and prescriptive method for this very purpose: On-Ramp to Azure Data. It provides step-by-step guidance based on best practices and proven cloud adoption methodologies, tools, and resources to migrate your data and analytics workloads to Azure. In a series of short sprints, it moves you from planning to use-case execution in 30 days.
Data migration doesn’t need to be messy. By walking into the process with a clear vision, a strong plan, and the answers to these four important questions, your migration will roll out much more smoothly.
Click here to learn more about the On-Ramp to Azure Data program.
AI is becoming more and more pervasive in today’s world, and its footprint is only growing. AI has proven to be a valuable tool, but like all tools it can be used for good and, without responsible AI practices, for ill. It may not be hyperbole to compare the current state of AI to the early days of nuclear power: something good (cheap energy for millions) that can also cause great harm (a weapon of mass destruction). In the case of AI, nobody is (yet!) saying that humans are about to be enslaved by our robot overlords, but there are real harms that AI systems can cause.
A brief case study
Consider a well-known case: the COMPAS system, used by a number of criminal justice agencies to predict the risk of reoffending. The original article from ProPublica can be found here. The system used an AI algorithm to determine, based on a person’s demographic information and criminal history, the risk that they might reoffend. Consider the following four defendants, each arrested for petty theft or drug possession, with these prior offenses:
Case 1 – 2 armed robberies, 1 attempted robbery
Case 2 – 4 juvenile misdemeanors
Case 3 – 1 attempted burglary
Case 4 – 1 resisting arrest without violence
Now, most people might think that Cases 1 and 3 would be the most likely to reoffend (and that is actually what happened). But what did the AI algorithm predict?
Case 1 – Low risk (3/10)
Case 2 – High risk (8/10)
Case 3 – Low risk (3/10)
Case 4 – High risk (10/10)
So what happened?
Cases 1 and 3 were white defendants, and cases 2 and 4 were black defendants. In the data used to build the AI algorithm (“the training set”), black people were more likely to be incarcerated than white people, and the AI learned exactly that.
As a result, the AI model learned racial bias, because it was not designed responsibly. Granted, these results have been cherry-picked as illustrative examples of the problem. However, a study published in Science Advances in 2018 showed the overall accuracy of the model was around 65%, which is comparable to the combined average results of untrained people.
In addition, it found that: “Black defendants who did not recidivate were incorrectly predicted to reoffend at a rate of 44.9%, nearly twice as high as their white counterparts at 23.5%; and white defendants who did recidivate were incorrectly predicted to not reoffend at a rate of 47.7%, nearly twice as high as their black counterparts at 28.0%.”
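To make those two error rates concrete, here is a minimal sketch of how you would compute them per group. The labels, predictions, and groups below are invented for illustration; they are not the COMPAS data.

```python
# Sketch: per-group false positive and false negative rates on made-up data.
import numpy as np

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])  # 1 = actually reoffended
y_pred = np.array([1, 0, 1, 0, 1, 0, 0, 1])  # 1 = predicted high risk
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])  # hypothetical groups

for g in np.unique(group):
    m = group == g
    # False positive rate: non-reoffenders wrongly flagged as high risk.
    fpr = np.mean(y_pred[m][y_true[m] == 0])
    # False negative rate: reoffenders wrongly flagged as low risk.
    fnr = np.mean(1 - y_pred[m][y_true[m] == 1])
    print(f"group {g}: FPR={fpr:.2f}, FNR={fnr:.2f}")
```

A large FPR gap in one direction and a large FNR gap in the other is exactly the disparity ProPublica reported.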
Given that this algorithm might be used by parole boards, or to decide if someone gets jail time or community sentencing, the potential for genuine harm here is huge.
So what can we do about it?
In the last couple of years, there has been intense research interest in addressing the problems of AI and in how to balance its huge benefits against its potential to cause harm.
Good news – these techniques and technologies are starting to come into wider use, and this year Microsoft announced its six principles for Responsible AI:
– Fairness
– Inclusiveness
– Reliability and Safety
– Privacy and Security
– Transparency
– Accountability
You can read more about these principles here, and over the next few weeks Catapult’s data science team will be digging into the principles of responsible AI in more detail. We will also show you how to implement some of these yourself using some cool modern Python packages!
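As a small preview, here is a sketch using Fairlearn, one of those packages, to produce the kind of disaggregated error rates discussed above. The labels and groups are again made up for illustration, not real data.

```python
# Sketch: disaggregated error rates with Fairlearn (pip install fairlearn).
import numpy as np
from fairlearn.metrics import MetricFrame, false_negative_rate, false_positive_rate

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])  # 1 = actually reoffended
y_pred = np.array([1, 0, 1, 0, 1, 0, 0, 1])  # 1 = predicted high risk
race = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])  # hypothetical groups

mf = MetricFrame(
    metrics={"FPR": false_positive_rate, "FNR": false_negative_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=race,
)
print(mf.by_group)      # error rates broken out per group
print(mf.difference())  # largest gap between groups, a simple fairness check
```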