{"id":29830,"date":"2025-11-04T12:59:46","date_gmt":"2025-11-04T07:29:46","guid":{"rendered":"https:\/\/opstree.com\/blog\/?p=29830"},"modified":"2025-11-04T12:59:46","modified_gmt":"2025-11-04T07:29:46","slug":"aws-and-azure-outages","status":"publish","type":"post","link":"https:\/\/opstree.com\/blog\/2025\/11\/04\/aws-and-azure-outages\/","title":{"rendered":"Complete Case Study On The AWS and Azure Outages Of October 2025"},"content":{"rendered":"<p><span style=\"font-weight: 400;\">October 2025 is still tough in cloud computing, as Amazon Web Services and Microsoft Azure two major cloud providers experience a massive outage, affecting a multimillion userbase, and who knows how many systems worldwide. Not only do these massive outages expose the fickle and brittle nature of the increasingly well-connected global cloud infrastructures, they also reiterate the cloud\u2019s complexity and demand for solid development and infrastructure oversight. In this article, we break down both outage incidents including the timing, the technical cause of the incidents, overview of the service impact, and much-needed lessons for cloud architects and DevOps dots.<\/span><!--more--><\/p>\n<h2><b>AWS Outage on October 20, 2025: DNS Race Condition and Service Cascade<\/b><\/h2>\n<h4><b>Incident Timeline<\/b><\/h4>\n<ul>\n<li><b>12:11 AM PDT (07:11 UTC)<\/b><span style=\"font-weight: 400;\">: AWS begins noticing increased error rates and latency in the US-EAST-1 region.<\/span><span style=\"font-weight: 400;\"><br \/>\n<\/span><\/li>\n<li><b>2:01 AM PDT (09:01 UTC)<\/b><span style=\"font-weight: 400;\">: Root cause identified as a <\/span><b>DNS resolution failure<\/b><span style=\"font-weight: 400;\"> affecting DynamoDB API endpoints.<\/span><span style=\"font-weight: 400;\"><br \/>\n<\/span><\/li>\n<li><b>3:35 AM PDT (10:35 UTC)<\/b><span style=\"font-weight: 400;\">: AWS initiates mitigation efforts; partial service recovery starts.<\/span><span style=\"font-weight: 400;\"><br 
\/>\n<\/span><\/li>\n<li><b>4:08 AM PDT (11:08 UTC)<\/b><span style=\"font-weight: 400;\">: Restoration continues for <a href=\"https:\/\/opstree.com\/blog\/2021\/11\/30\/ec2-store-overview-difference-b-w-aws-ebs-and-instance-store\/\" target=\"_blank\" rel=\"noopener\">EC2<\/a>, Lambda, SQS, and other dependent services.<\/span><span style=\"font-weight: 400;\"><br \/>\n<\/span><\/li>\n<li><b>12:15 PM PDT (19:15 UTC)<\/b><span style=\"font-weight: 400;\">: Substantial recovery reported across core services.<\/span><span style=\"font-weight: 400;\"><br \/>\n<\/span><\/li>\n<li><b>3:00 PM PDT (22:00 UTC)<\/b><span style=\"font-weight: 400;\">: Full restoration declared by AWS.<\/span><span style=\"font-weight: 400;\"><br \/>\n<\/span><\/li>\n<\/ul>\n<h3><b>Root Cause and Technical Details<\/b><\/h3>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">The root cause of the outage was initiated by a software update to the API of DynamoDB, which inadvertently caused a race condition within the DNS cache of AWS\u2019 internal network.<\/span><span style=\"font-weight: 400;\"><br \/>\n<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">This activity affected the internal DNS records of DynamoDB and prevented the clients and other AWS services that depended on it from resolving the critical service endpoints.<\/span><span style=\"font-weight: 400;\"><br \/>\n<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">The DNS failure percolated into a failure of over 113 <a href=\"https:\/\/opstree.com\/aws-partner\/\" target=\"_blank\" rel=\"noopener\">AWS services<\/a> and products including Lambda, CloudFormation, Cognito, and IAM as the internal API communication was blocked.<\/span><span style=\"font-weight: 400;\"><br \/>\n<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Nevertheless, the network 
observations of Ashburn, Virginia, AWS edge nodes confirmed the problems of packet loss, which depicted the problems as infrastructure-level failures rather than customer-side problems.<\/span><span style=\"font-weight: 400;\"><br \/>\n<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">The cascading patterns of the failure depicted the web of critical dependency that surrounds the operations of AWS data centers. The failure stretched beyond DynamoDB into exhaustion and throttling problems in the interconnected services.<\/span><\/li>\n<\/ul>\n<p><strong>[ Also Read:\u00a0 <a href=\"https:\/\/opstree.com\/blog\/2025\/05\/28\/aws-for-beginners-what-is-it-how-it-works-and-key-benefits\/\" target=\"_blank\" rel=\"noopener\">AWS For Beginners: What Is It, How It Works, and Key Benefits<\/a> ]<\/strong><\/p>\n<h3><b>Service Impact Examples<\/b><\/h3>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Major consumer apps like Snapchat and Roblox went offline or operated with severe delays.<\/span><span style=\"font-weight: 400;\"><br \/>\n<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Financial institutions faced transaction delays due to DynamoDB unavailability.<\/span><span style=\"font-weight: 400;\"><br \/>\n<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">E-commerce platforms halted order processing.<\/span><span style=\"font-weight: 400;\"><br \/>\n<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Over 17 million incident reports were generated worldwide during the outage, underscoring its vast impact.<\/span><span style=\"font-weight: 400;\"><br \/>\n<\/span><\/li>\n<\/ul>\n<h3><b>Recovery and Mitigation<\/b><\/h3>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">AWS engineers performed 
<\/span><b>traffic rerouting<\/b><span style=\"font-weight: 400;\"> away from impacted nodes.<\/span><span style=\"font-weight: 400;\"><br \/>\n<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Rolling back the faulty API update was critical to restoring DNS integrity.<\/span><span style=\"font-weight: 400;\"><br \/>\n<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">A phased backlog processing approach was employed to avoid secondary overloads.<\/span><span style=\"font-weight: 400;\"><br \/>\n<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Post-event analysis emphasized the need for improved DNS cache validation and routing resilience.<\/span><span style=\"font-weight: 400;\"><br \/>\n<\/span><\/li>\n<\/ul>\n<p><strong>[Our Case Study: <a href=\"https:\/\/opstree.com\/case-study\/migrating-from-on-prem-to-aws-with-enhanced-observability-security-and-cost-optimization\/\" target=\"_blank\" rel=\"noopener\">Migrating from On-Prem to AWS with Enhanced Observability, Security, and Cost Optimization<\/a>]<\/strong><\/p>\n<h2><b>Microsoft Azure Outage on October 29, 2025: Misconfiguration of Azure Front Door<\/b><\/h2>\n<h3><b>Incident Timeline<\/b><\/h3>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>15:45 UTC (8:45 AM PDT)<\/b><span style=\"font-weight: 400;\">: Initial errors and increased latency detected on Azure Front Door (AFD).<\/span><span style=\"font-weight: 400;\"><br \/>\n<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>16:00 UTC (9:00 AM PDT)<\/b><span style=\"font-weight: 400;\">: Public acknowledgment by Microsoft of an outage linked to a configuration deployment error.<\/span><span style=\"font-weight: 400;\"><br \/>\n<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>17:51 UTC<\/b><span style=\"font-weight: 400;\">: Microsoft confirms inadvertent 
configuration change as root cause.<\/span><span style=\"font-weight: 400;\"><br \/>\n<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>00:05 UTC (5:05 PM PDT Oct 30)<\/b><span style=\"font-weight: 400;\">: Full service restoration after progressive rollback and traffic rerouting.<\/span><span style=\"font-weight: 400;\"><br \/>\n<\/span><\/li>\n<\/ul>\n<h3><b>Root Cause and Technical Details<\/b><\/h3>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">A faulty tenant configuration deployment that bypassed Microsoft\u2019s safety validation corrupted global routing tables in Azure Front Door.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Because AFD fronts global HTTP\/HTTPS traffic, the corrupted routing tables caused widespread routing failures, dropped connections, TLS handshake errors, and refused authentications.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">The outage touched every Azure region, since AFD acts as a global traffic entry point.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">The failure cascaded from end users through dependent infrastructure systems across the most critical business services, underscoring the dangers of large-scale deployment automation with insufficient validation safeguards.<\/span><\/li>\n<\/ul>\n<h3><b>Service Impact Examples<\/b><\/h3>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Microsoft 365 productivity apps (Outlook, Teams) faced outages.<\/span><span style=\"font-weight: 400;\"><br \/>\n<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Xbox Live services and Minecraft authentication failed.<\/span><span style=\"font-weight: 400;\"><br \/>\n<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Azure SQL databases experienced access issues.<\/span><span style=\"font-weight: 400;\"><br \/>\n<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Major enterprises, airlines like Alaska Airlines, financial services, retailers (Walmart, Costco), and educational platforms reported service disruptions.<\/span><span style=\"font-weight: 400;\"><br \/>\n<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Downdetector recorded over <\/span><b>16,000 user reports<\/b><span style=\"font-weight: 400;\"> for Azure and <\/span><b>9,000 for Microsoft 365<\/b><span style=\"font-weight: 400;\"> during the peak.<\/span><span style=\"font-weight: 400;\"><br \/>\n<\/span><\/li>\n<\/ul>\n<p><strong>[ Also Read:\u00a0 <a href=\"https:\/\/opstree.com\/blog\/2025\/10\/14\/data-engineering-with-azure-databricks\/\" target=\"_blank\" rel=\"noopener\">The Ultimate Guide to Cloud Data Engineering with Azure, ADF, and Databricks<\/a>\u00a0]<\/strong><\/p>\n<h3><b>Recovery and Mitigation<\/b><\/h3>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\"><a href=\"https:\/\/www.microsoft.com\/en-in\/microsoft-365\/onedrive\/online-cloud-storage\" target=\"_blank\"
rel=\"noopener\">Microsoft<\/a> immediately rolled back to a previous known good configuration.<\/span><span style=\"font-weight: 400;\"><br \/>\n<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Traffic was progressively rerouted out of Azure Front Door to maintain service continuity.<\/span><span style=\"font-weight: 400;\"><br \/>\n<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Following incident analysis revealed a software defect in the deployment safeguarding mechanism that enabled the faulty configuration to bypass validations.<\/span><span style=\"font-weight: 400;\"><br \/>\n<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Recommendations included introducing enhanced automated testing, incremental deployment strategies, and improved fail-safes.<\/span><span style=\"font-weight: 400;\"><br \/>\n<\/span><\/li>\n<\/ul>\n<h2><b>Lessons from Combined Outages: Key Takeaways for DevOps and Cloud Teams<\/b><\/h2>\n<table>\n<tbody>\n<tr>\n<td><b>Lesson<\/b><\/td>\n<td><b>Explanation<\/b><\/td>\n<\/tr>\n<tr>\n<td><span style=\"font-weight: 400;\">Cloud Has Single Points of Failure<\/span><\/td>\n<td><span style=\"font-weight: 400;\">Even giants like AWS and Azure can fail due to concentrated critical services or misconfigurations.<\/span><\/td>\n<\/tr>\n<tr>\n<td><span style=\"font-weight: 400;\">DNS is a Critical Backbone<\/span><\/td>\n<td><span style=\"font-weight: 400;\">Disruption in DNS resolution cascades widely affecting cloud service accessibility and stability.<\/span><\/td>\n<\/tr>\n<tr>\n<td><span style=\"font-weight: 400;\">Automation Needs Rigorous Control<\/span><\/td>\n<td><span style=\"font-weight: 400;\">Faulty automated deployments require comprehensive validation and rollback strategies to avoid outages.<\/span><\/td>\n<\/tr>\n<tr>\n<td><span style=\"font-weight: 400;\">Multi-Region and 
Multi-Cloud Resilience<\/span><\/td>\n<td><span style=\"font-weight: 400;\">Architectures must span multiple regions and providers to mitigate isolated regional failures.<\/span><\/td>\n<\/tr>\n<tr>\n<td><span style=\"font-weight: 400;\">Real-Time Monitoring and Alerting<\/span><\/td>\n<td><span style=\"font-weight: 400;\">Continuous observability enables early detection and faster incident response to contain failures.<\/span><\/td>\n<\/tr>\n<tr>\n<td><span style=\"font-weight: 400;\">Incident Response Preparedness<\/span><\/td>\n<td><span style=\"font-weight: 400;\">Phased restoration and backlog clearing are critical to avoid secondary failures post-outage.<\/span><\/td>\n<\/tr>\n<tr>\n<td><span style=\"font-weight: 400;\">Chaos Engineering Applicability<\/span><\/td>\n<td><span style=\"font-weight: 400;\">Regular failure simulations uncover weaknesses before production incidents.<\/span><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h2><b>Practical Strategies to Build Cloud Resilience<\/b><\/h2>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>Use Infrastructure as Code (IaC)<\/b><span style=\"font-weight: 400;\"> with tools like Terraform, <a href=\"https:\/\/opstree.com\/blog\/2024\/06\/13\/devops-cloud-migration\/\" target=\"_blank\" rel=\"noopener\">CloudFormation<\/a> for consistent deployments.<\/span><span style=\"font-weight: 400;\"><br \/>\n<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>Design Active-Active Multi-Region Architectures<\/b><span style=\"font-weight: 400;\"> using AWS Global Accelerator or Azure Traffic Manager.<\/span><span style=\"font-weight: 400;\"><br \/>\n<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>Deploy Multi-Cloud Disaster Recovery:<\/b><span style=\"font-weight: 400;\"> AWS Elastic Disaster Recovery and Azure Site Recovery enable cross-platform failover.<\/span><span style=\"font-weight: 400;\"><br \/>\n<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>Employ 
Canary and Blue-Green Deployment Models<\/b><span style=\"font-weight: 400;\"> to reduce deployment risk.<\/span><span style=\"font-weight: 400;\"><br \/>\n<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>Invest in Real-Time Observability Solutions<\/b><span style=\"font-weight: 400;\"> like Datadog, New Relic, and Azure Monitor for proactive fault detection.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>Regularly Conduct Chaos Engineering Experiments<\/b><span style=\"font-weight: 400;\"> using tools like Gremlin or Chaos Mesh to simulate DNS and routing failures.<\/span><span style=\"font-weight: 400;\"><br \/>\n<\/span><\/li>\n<\/ul>\n<h2><b>References<\/b><\/h2>\n<table>\n<tbody>\n<tr>\n<td><b>No.<\/b><\/td>\n<td><b>Source<\/b><\/td>\n<td><b>Coverage<\/b><\/td>\n<\/tr>\n<tr>\n<td><span style=\"font-weight: 400;\">1<\/span><\/td>\n<td><span style=\"font-weight: 400;\">ThousandEyes: AWS Outage Analysis October 20, 2025<\/span><\/td>\n<td><span style=\"font-weight: 400;\">Detailed network and timeline analysis<\/span><\/td>\n<\/tr>\n<tr>\n<td><span style=\"font-weight: 400;\">2<\/span><\/td>\n<td><span style=\"font-weight: 400;\">AWS PlainEnglish: AWS Outage Case Study October 20, 2025<\/span><\/td>\n<td><span style=\"font-weight: 400;\">Incident technical breakdown and recovery<\/span><\/td>\n<\/tr>\n<tr>\n<td><span style=\"font-weight: 400;\">3<\/span><\/td>\n<td><span style=\"font-weight: 400;\">Reuters: Amazon AWS Cloud Service Recovery October 20, 2025<\/span><\/td>\n<td><span style=\"font-weight: 400;\">Service impact and global effects<\/span><\/td>\n<\/tr>\n<tr>\n<td><span style=\"font-weight: 400;\">4<\/span><\/td>\n<td><span style=\"font-weight: 400;\">Aljazeera: What Caused Amazon&#8217;s AWS Outage October 20, 2025<\/span><\/td>\n<td><span style=\"font-weight: 400;\">Cause and impact overview<\/span><\/td>\n<\/tr>\n<tr>\n<td><span style=\"font-weight: 400;\">5<\/span><\/td>\n<td><span style=\"font-weight:
400;\">Breached Company: Azure Front Door Outage October 29, 2025<\/span><\/td>\n<td><span style=\"font-weight: 400;\">Timeline and global routing failure<\/span><\/td>\n<\/tr>\n<tr>\n<td><span style=\"font-weight: 400;\">6<\/span><\/td>\n<td><span style=\"font-weight: 400;\">Microsoft Azure Status History and LinkedIn Post October 29-30, 2025<\/span><\/td>\n<td><span style=\"font-weight: 400;\">Official cause and recovery actions<\/span><\/td>\n<\/tr>\n<tr>\n<td><span style=\"font-weight: 400;\">7<\/span><\/td>\n<td><span style=\"font-weight: 400;\">ThousandEyes Blog: Azure Front Door Outage Analysis October 29, 2025<\/span><\/td>\n<td><span style=\"font-weight: 400;\">Technical routing failure explanation<\/span><\/td>\n<\/tr>\n<tr>\n<td><span style=\"font-weight: 400;\">8<\/span><\/td>\n<td><span style=\"font-weight: 400;\">Economic Times: Azure Outage Latest Update October 29, 2025<\/span><\/td>\n<td><span style=\"font-weight: 400;\">Service restoration and error details<\/span><\/td>\n<\/tr>\n<tr>\n<td><span style=\"font-weight: 400;\">9<\/span><\/td>\n<td><span style=\"font-weight: 400;\">Senthorus Blog: Azure Outage October 29, 2025<\/span><\/td>\n<td><span style=\"font-weight: 400;\">Impact on security operations and cloud resilience lessons<\/span><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>&nbsp;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>October 2025 is still tough in cloud computing, as Amazon Web Services and Microsoft Azure two major cloud providers experience a massive outage, affecting a multimillion userbase, and who knows how many systems worldwide. 
Not only do these massive outages expose the fickle and brittle nature of the increasingly well-connected global cloud infrastructures, they also &hellip; <a href=\"https:\/\/opstree.com\/blog\/2025\/11\/04\/aws-and-azure-outages\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;Complete Case Study On The AWS and Azure Outages Of October 2025&#8221;<\/span><\/a><\/p>\n","protected":false},"author":244582682,"featured_media":29835,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_coblocks_attr":"","_coblocks_dimensions":"","_coblocks_responsive_height":"","_coblocks_accordion_ie_support":"","jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":true,"jetpack_social_options":{"image_generator_settings":{"template":"highway","enabled":false},"version":2}},"categories":[36349927],"tags":[768739294,413107854,5767724,343865,768739407],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"https:\/\/opstree.com\/blog\/wp-content\/uploads\/2025\/11\/AWS-Azure.jpg","jetpack_likes_enabled":true,"jetpack_sharing_enabled":true,"jetpack_shortlink":"https:\/\/wp.me\/pfDBOm-7L8","jetpack-related-posts":[],"_links":{"self":[{"href":"https:\/\/opstree.com\/blog\/wp-json\/wp\/v2\/posts\/29830"}],"collection":[{"href":"https:\/\/opstree.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/opstree.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/opstree.com\/blog\/wp-json\/wp\/v2\/users\/244582682"}],"replies":[{"embeddable":true,"href":"https:\/\/opstree.com\/blog\/wp-json\/wp\/v2\/comments?post=29830"}],"version-history":
[{"count":5,"href":"https:\/\/opstree.com\/blog\/wp-json\/wp\/v2\/posts\/29830\/revisions"}],"predecessor-version":[{"id":29836,"href":"https:\/\/opstree.com\/blog\/wp-json\/wp\/v2\/posts\/29830\/revisions\/29836"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/opstree.com\/blog\/wp-json\/wp\/v2\/media\/29835"}],"wp:attachment":[{"href":"https:\/\/opstree.com\/blog\/wp-json\/wp\/v2\/media?parent=29830"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/opstree.com\/blog\/wp-json\/wp\/v2\/categories?post=29830"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/opstree.com\/blog\/wp-json\/wp\/v2\/tags?post=29830"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}