Go into your subscription or resource group. In our case we’re going into the resource group containing all our WordPress blog resources.
Next, click Cost Analysis in the left-hand menu.
From this view you can select your scope (by default it will be the scope you’re currently in), the grouping, date range, and filtering options.
Here you can see the cost of all the resources in the resource group over the last week.
We’re barely breaking a dollar a day right now, but we have promotional pricing on the Linux app service, which runs through the end of 2019. Once the promotional pricing ends, the app service will cost around $40-$50 a month, roughly doubling our daily cost to a bit over $2 a day.
So how much does it cost to run a “production” WordPress blog on Azure? About $70/month without promotional pricing.
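A quick sanity check on that estimate, assuming a 30-day month, the midpoint of the quoted $40-$50 range, and roughly $1/day of current spend (all round numbers, not exact Azure pricing):

```python
# Rough check of the post-promotion cost estimate above.
# Assumptions: 30-day months, ~$1/day current spend for the whole
# resource group, and $45/month as the midpoint of the $40-$50 range.
DAYS_PER_MONTH = 30
current_daily = 1.00
app_service_monthly = 45.00

app_service_daily = app_service_monthly / DAYS_PER_MONTH   # ~$1.50/day
new_daily = current_daily + app_service_daily              # ~$2.50/day
new_monthly = new_daily * DAYS_PER_MONTH                   # ~$75/month

print(f"${new_daily:.2f}/day, ${new_monthly:.0f}/month")
```

Which lands in the same ballpark as the ~$70/month figure.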
A little over a year ago I was reflecting on my career as a Software Engineer and came to the conclusion that I wanted more control over what I worked on, and that I wanted the responsibility of owning a project from ideation to delivery. I wasn’t in a position at my day job to walk in and demand full autonomy on a project of my choosing, and I didn’t have the financial resources to quit my job to pursue my own project.
Given the constraints I decided to dedicate my free time after work and on weekends to a project that was personal to me.
I have a relatively sensitive stomach, which became even more sensitive over the previous summer. Faced with this, I started using various food tracking apps in the hope of drawing a correlation between what I ate and how my stomach felt.
Some of these apps even attempted to solve the exact problem I faced, but their analysis was deeply flawed, and their reports were full of false conclusions.
Being a Software Engineer, I approached the problem the way I normally do: I exported the raw tracking data and built a Python script that ran some simple statistical analysis and produced a spreadsheet of every food I had tracked and its correlation with good and bad stomach symptoms.
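That script is long gone, but the core idea is simple enough to sketch. A naive version, assuming the exported data reduces to (day, food) entries plus a set of days with bad symptoms (all names and data here are made up):

```python
from collections import defaultdict

def food_symptom_scores(meals, bad_days):
    """Naive correlation: for each food, the fraction of the days it was
    eaten that were also days with bad stomach symptoms.

    meals:    list of (day, food) tuples from the exported tracker data
    bad_days: set of days on which bad symptoms were logged
    """
    days_eaten = defaultdict(set)
    for day, food in meals:
        days_eaten[food].add(day)

    return {
        food: len(days & bad_days) / len(days)
        for food, days in days_eaten.items()
    }

# Tiny made-up example: salad shows up on every bad day.
meals = [(1, "salad"), (1, "toast"), (2, "salad"), (3, "rice"), (4, "salad")]
bad_days = {1, 2, 4}
scores = food_symptom_scores(meals, bad_days)
print(scores)  # salad scores 1.0, rice 0.0
```

A real analysis would need to account for sample size and confounders (toast also scores 1.0 here on a single data point), which is exactly the trap the food-tracking apps fell into.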
This first report brought me to the realization that the salads I had started eating daily to ease my stomach problems were directly correlated with an increase in stomach issues.
I cut out salad, my stomach improved slightly, and I was determined to explore this further.
Next, I scoured the internet to find anyone else using the food and symptom tracking app I used. I found a woman willing to run my analysis on her data. She appreciated the spreadsheet I gave her, but had some trouble deciphering it. Still, she found some of the information useful. A few more people were willing to share their exported data, but most struggled with exporting it and emailing it to me.
A technical barrier stood between me and the data I needed to collect. To continue with my MVP, I needed to build an easy-to-use app that would feed my algorithms the data needed to generate reports. Those reports also needed to be presented in an easy-to-understand way; a spreadsheet of correlation stats was not going to work for most people.
I went down the road of learning Swift and iOS programming. It was going well, but these were new technologies to me, and it was going to take several months before I could deliver.
I needed to leverage my core skillset and deliver this faster. I reconsidered my approach. What was I trying to build? A polished iOS app? No. I was trying to build an easy to use interface for logging food and symptoms, and for viewing reports. I could do this much faster by building a mobile website with an Azure cloud powered backend. These were skills I already had.
Within a week I had built out the tracking functionality of my application and released it as a beta. The reporting functionality was not complete, but I needed at least a week of tracked data before I could generate useful reports anyway.
On the report section of my application I put up a notice:
“Our algorithms require a week’s data before we can display your results.”
This gave me a week to complete the report processing service.
The reporting engine was complete six days later and on the seventh day I took down the notice and started allowing users to view their reports.
Early users were happy with the application, but I struggled to find new users to onboard. Many were not tech savvy and were confused by how to use it. One user wrote an angry email asking why my useless app was unable to tell her what foods would give her symptoms BEFORE she ever tried them.
I’m a Software Engineer. I have not worked in healthcare. I have not worked extensively with non-tech savvy users. I made the mistake of thinking that if I could build something, the rest would figure itself out. Turns out engineering is not always the hardest problem.
Several users continued to use the application on a regular basis, but the numbers dwindled, and I shut it down three months after launch.
By no means did I consider it a failure; I learned quite a bit. But for now it sits in my GitHub repository gathering dust.
The natural progression of a system will lead it to fill the confines of the structure it is contained in.
Therefore, the first act in architecting a system is defining the structure that contains it.
When a system exists in opposition to its containing structure, this is a problem; the result will be the destruction of the system, container, or both.
Last week I reviewed the design of two different systems. At a high level they both accomplished the same goal, but nonetheless their designs differed.
The first was developed by a single team; it had no clear security, data, or communication boundaries within its implementation. The second, developed by two teams, was composed of two distinct and loosely coupled services which communicated with each other through a single externally developed service.
Their architectures mirrored the developing teams’ organizational structures; their designs were influenced more by the organization in which they existed than by the problem they solved.
The scenario above is not unique, and has been observed in software for decades (see: Conway’s law). Yet we rarely expect leadership, the people who dictate an organization’s structure, to have the technical ability to architect the systems developed under them.
If you want to improve the design of the systems in your organization, you had better invest in making your architects into managers, or your managers into architects.
I entered the Azure Data Explorer (ADX) world recently after primarily working with Azure Tables, SQL Databases, and Cosmos DBs over the past few years.
When I moved on to ADX I approached it like the databases I had become so familiar with. But ADX is not like those databases, and it required that I adjust my thought process around working with it.
Querying data in ADX was straightforward enough, but getting data into ADX was not. While there are methods of synchronously inserting handfuls of records at a time, these methods are not intended for production use. If you plan on using ADX for individual or small create, update, and delete transactions, you’re probably better off looking elsewhere.
Quickly, while we’re on the subject of updating and deleting: for the most part, this is not supported in ADX. Tables and databases support retention policies, where on insertion you can specify the time to live of the data being inserted; so if you know you only need the data for a specified amount of time before it’s cleaned up, this is natively supported. There are very inefficient methods of deleting a single record, but these are only implemented for GDPR use cases and should be treated as a last resort. There are also methods of deleting whole blocks of data; you don’t need to worry about those until you get into advanced use of ADX.
Anyway, moving on to what this is really about: getting your data into ADX.
When thinking about ingestion you first have to decide whether you’re going to tell ADX when to ingest your data (direct ingestion), or tell ADX where your data is and let it decide the best time to ingest it (managed ingestion). For most use cases, if your workload supports the managed model, it is the most efficient and recommended method of getting your data into ADX. The managed model allows ADX to handle the distribution, rate of ingestion, and clean-up of ingested data. If you opt for the direct model, you will have to orchestrate your ingestion yourself to avoid overloading the cluster with ingestion requests, but you will have more control over every aspect of ingestion.
Managed and direct are just the models; they are implemented by a wide range of SDKs. The following are the most common methods of ingestion:
Inline ingestion (Direct Ingestion)
Specify the data to be ingested (inserted) right in the query itself. This method is not intended for production use and is commonly used in the development phase or for small one-time ingestions.
Ingest from query (Direct Ingestion)
Write a Data Explorer query and ingest the results of that query execution into another table. This can be used across databases and clusters, and is typically used for generating reports and storing them in another table. For best performance the result sets should be less than 1 GB; if you need to work with larger sets, break them into multiple smaller batches.
Ingest from storage (Direct Ingestion)
Tell Data Explorer where the data to be ingested lives (a blob URI, for example), and it will pull the data in.
Queued Ingestion (Managed Ingestion)
Similar to ingest from storage, except with this method you queue up a request with the location of the data to be ingested. This allows the cluster to determine the best time to do the ingestion: once it has capacity, it pulls an item from the queue and ingests it. This is the most efficient method for ingesting large quantities of data, but it can result in longer latency between when the ingestion request is sent and when the data is ready for querying.
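To make the distinction concrete, here is a toy sketch of the queued model in plain Python (this is not the ADX SDK, just the shape of the idea): clients enqueue references to their data and return immediately, and the cluster drains the queue when it has capacity.

```python
import queue

# Toy model of managed (queued) ingestion: clients enqueue *references*
# to data (e.g. blob URIs) rather than pushing data at the cluster.
ingestion_queue = queue.Queue()

def request_ingestion(blob_uri):
    """Client side: hand off a pointer to the data and return immediately."""
    ingestion_queue.put(blob_uri)

def drain_when_capacity_allows():
    """Cluster side: pull queued items and ingest them at its own pace."""
    ingested = []
    while not ingestion_queue.empty():
        ingested.append(ingestion_queue.get())
    return ingested

request_ingestion("https://myaccount.blob.core.windows.net/data/part-000.csv")
request_ingestion("https://myaccount.blob.core.windows.net/data/part-001.csv")
print(drain_when_capacity_allows())
```

In the direct model, by contrast, the client calls the cluster synchronously and must do this pacing itself; with queued ingestion the rate limiting lives on the cluster side, which is why it scales better for bulk loads.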
Imagine hosting a dinner with all your closest friends.
Before the dinner, you asked everyone to bring their favorite meal. One by one, as people arrive, you take their meals and dump them on a single platter in the middle of the table. Once everyone has arrived, you all sit down to share this unrecognizable heap of food.
This is the naive approach most companies, schools, and organizations take to cultural inclusivity. The forced inclusion of every member’s culture without a thread of critical thought.
Let’s imagine this dinner party again. This time the host doesn’t take anyone’s preferences into consideration and instead cooks what they believe will be the most enjoyable meal for everyone.
This might be better than an arbitrary heap of food, but still not ideal.
We see this in organizations that impose a pre-defined company culture from the top down. At the very least there is some thought put into it, but it exists without input from most of the people who make up the organization, and they have little investment in a culture they had no hand in creating.
Again we’ll imagine this dinner party. Like the first example, the host asks everyone to bring their favorite meal. Unlike the first example, everyone sits down and enjoys their own meal.
Not bad right?
In this scenario everyone gets to enjoy what they like, and can even share it with those who are interested. We see this in organizations that encourage individual expression. It’s the best example so far, and it requires the least effort on the part of the organization, but it can still be improved upon.
While the previous example can increase satisfaction for individual participants, it does little to grow the organization as a whole: individuals maintain their identity, but their investment in the organization doesn’t deepen.
Finally, we’ll imagine this dinner one more time. This time, however, everyone comes with the expectation that they’ll be cooking. They plan ingredients, plan dishes, and work together in the kitchen to create something unique to the group.
The first dinner may or may not be very good, but it will have aspects of everyone involved, and will get better with each iteration.
This will take trial and error, and will evolve over time as members come and go. It takes more creativity and critical thinking, but what you end up with is an evolving culture unique to the organization and its members. A new culture which develops organically and is inherently inclusive of those involved in its creation.
Organizations are not dinner parties though. As to what this looks like in practice, check back for my next post on this subject.
Shortly after deploying my WordPress site I realized I was having a few issues with HTTPS. I thought it would be as simple as changing the site URL in WordPress and turning on HTTPS Only in Azure. It was not that simple…
Thankfully there’s not too much you need to do. It took me a few hours to figure it all out, but hopefully this will save you the time I wasted.
Step 1: Enable HTTPS on the Azure App Service
Open your App Service in the Azure portal and enable HTTPS only.
Step 2: Update/Create .htaccess file
Create a .htaccess file in the root directory of your website with the following within it:
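The exact rules can vary, but a typical version of this file for WordPress behind Azure’s TLS-terminating front end redirects based on the X-Forwarded-Proto header (treat this as a common starting point rather than a guaranteed fit for your setup):

```apacheconf
<IfModule mod_rewrite.c>
  RewriteEngine On
  # Azure terminates TLS before traffic reaches Apache, so check the
  # forwarded protocol header rather than %{HTTPS} directly.
  RewriteCond %{HTTP:X-Forwarded-Proto} !https
  RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
</IfModule>
```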
I’m going to set up a personal blog running WordPress, hosted on Azure. I have a few goals: the site needs to be relatively performant, cost under $50 a month, and have automated deployment from GitHub.
I’ll be writing this post as I go through the process of setting up the website. The last time I set up a WordPress blog was in high school, so it’s been over 10 years. I work with Azure daily, but most of my recent experience has been with data pipelines and big data systems.
This will be a learning experience for both of us.
Software I’m using for this
VS Code (Mac)
MySQL Workbench https://www.mysql.com/products/workbench/
I’ll only be setting up one environment. Normally I’d have dev and production environments with automated deployment using ARM templates. However, given this is a personal website, we won’t go that far yet.
The first step is creating a resource group. Log into portal.azure.com and click the hamburger menu. From there click create a resource, and in the search box type “resource group.”
For this we’ll name it “last-resort-prod” and create it in the Central US region.
A note on resource groups: resource groups can be thought of as folders in Azure. They’re just a logical grouping of resources. For most use cases, the region the resource group is deployed in is not important and has no impact on the performance of the resources within it.
The resources within a group can also be created in regions different from the group itself.
Resource groups do store some metadata about their resources, so an outage in the region the resource group is deployed to can prevent you from managing (creating, deleting, updating tags on, etc.) the resources within it. However, the underlying resources themselves will not be affected as long as their own regions are not experiencing an outage.
After the resource group is created we’ll follow the same flow but this time after clicking create resource, type “Web App” in the search box, then create.
I went with Linux and selected the lowest price tier that has the “Always On” setting. Without Always On, Azure will shut down the app service during periods of inactivity. When that happens, Azure has to start the app service again when a request comes in, which can result in 10+ second delays the first time someone loads the site after it hasn’t been visited for a while. You can see my settings below.
Finally we’ll create the MySQL DB. There are cheaper options available for MySQL servers hosted on Azure, but I am looking for the easiest to manage option so I will be using Azure Database for MySQL.
Again, click create resource, this time searching for “Azure Database for MySQL,” then click create. Select the resource group you previously created and give your MySQL server a name. Select your size; I’ll be using the most basic plan with the most limited storage and vCore options. You can always upgrade later if needed. Here are my settings below (security-related fields are blocked out for obvious reasons).
Now that I have all my Azure resources created I’m going to get started installing WordPress.
First I’ll create a database on the MySQL server for WordPress to use. I’m using MySQL Workbench as a GUI for managing the MySQL server.
Once MySQL Workbench is open, click add connection.
You’ll need the username and password you used when creating the MySQL server. Open the MySQL server in the Azure portal to get the host name and server admin login.
Use these values in the add connection screen in MySQL Workbench. Leave the port value as-is. Click test connection to verify the values are correct.
If test connection fails (AKA: how to add a firewall rule for your client): the most likely reason is that you need to add a firewall rule for yourself. Go back to your MySQL server in the Azure portal and click connection security in the left side menu. From there click Add client IP. This will add your computer’s IP to the list of allowed IPs. Once your IP is added, click save, then test the connection again.
Once you have a connection, open the query window and execute CREATE DATABASE **yourWordPressDbNameHere**;
I will be using GitHub integration to handle my deployments. If you will not be using GitHub, you can use a standard FTP client to copy the files over to the wwwroot folder of your app service. The FTP login details can be found in the publish profile for your app service. To download the publish profile, open your app service in the Azure portal and click the “Get publish profile” button:
Before starting this I created a repository for my website on GitHub, then used the GitHub client to clone the repo to my local machine. Once it was cloned, I copied the WordPress installation files into the root directory of my git repo and pushed back to GitHub.
Setting up automated deployment within the Azure portal is very easy. Navigate back to your App Service and click “Deployment Center” in the menu. Once there, select GitHub as your source control, click next, click App Service build service, click next, navigate to your repo using the drop-downs, then next, and finish. You now have automated deployment set up. Wait a few minutes and refresh the page to ensure a valid commit was pulled and deployed successfully.
Once there is a successful deployment open your app service within the Azure portal and click browse. This will navigate you to your app services webpage, which should redirect to the WordPress installation page.
Follow the prompts for the installation. When setting up the DB connection, use the same host, username, and password values you used to connect to the DB earlier. Alternatively, for a more secure option, create a new user with access only to the WordPress database you created, and use that for the setup.
Click continue. If the connection fails, it is most likely due to a MySQL firewall issue like the one we experienced earlier. To fix this, navigate back to the firewall rules page and turn on “Allow access to Azure services.”
After updating the firewall rule the installation should run successfully. You can then create a WordPress user and finish the setup.
You now have a functioning WordPress installation on Azure!
Let me know if you have any questions in the comments, I’d love to work through some problems with you. 🙂