<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[MinistryofDevOps]]></title><description><![CDATA[Simplifying DevOps challenges and learning]]></description><link>https://ministryofdevops.com</link><generator>RSS for Node</generator><lastBuildDate>Wed, 15 Apr 2026 13:48:49 GMT</lastBuildDate><atom:link href="https://ministryofdevops.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Ship Context, Not Docs: How AI Is Rewriting the Rules of Software Delivery
]]></title><description><![CDATA[Ship Context, Not Docs: How AI Is Rewriting the Rules of Software Delivery
The way we deliver software to other developers is fundamentally changing. And most teams haven't noticed yet.
For decades, t]]></description><link>https://ministryofdevops.com/ship-context-not-docs-how-ai-is-rewriting-the-rules-of-software-delivery</link><guid isPermaLink="true">https://ministryofdevops.com/ship-context-not-docs-how-ai-is-rewriting-the-rules-of-software-delivery</guid><category><![CDATA[AI]]></category><category><![CDATA[GitHubCopilot ]]></category><category><![CDATA[claude.ai]]></category><category><![CDATA[claude]]></category><category><![CDATA[getting started]]></category><category><![CDATA[README]]></category><category><![CDATA[Devops]]></category><category><![CDATA[developer experience]]></category><category><![CDATA[context engineering]]></category><dc:creator><![CDATA[Mudit Kumar]]></dc:creator><pubDate>Thu, 12 Mar 2026 06:36:40 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/6219790945bf7fc41cc9af6a/b30a1f66-87bb-4dd1-83fb-bc9236e4cd59.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1>Ship Context, Not Docs: How AI Is Rewriting the Rules of Software Delivery</h1>
<p>The way we deliver software to other developers is fundamentally changing. And most teams haven't noticed yet.</p>
<p>For decades, the developer experience playbook looked the same: write a README, create a getting started guide, maintain API documentation, build a wiki. The assumption was always that a <em>human</em> would read these documents, understand them, and then write code.</p>
<p>That assumption is now wrong — or at the very least, incomplete.</p>
<p>Today, the first consumer of your project's documentation is increasingly not a human. It's an AI coding agent. Claude Code, GitHub Copilot, Cursor, Windsurf — these tools are how a growing number of developers interact with codebases for the first time. And these tools don't read your beautifully formatted Confluence page. They read context files.</p>
<h2>The Old World: Documentation for Humans</h2>
<p>We all know what traditional documentation looks like. A README with installation steps. A "Getting Started" guide with screenshots. API reference docs generated from code comments. Architecture Decision Records buried in a wiki no one updates.</p>
<p>And let's be honest — even humans struggled with this model. Docs went stale the moment they were written. New joiners spent days piecing together tribal knowledge. The getting started guide assumed a setup that hadn't existed for three sprints.</p>
<p>The problem was never a lack of documentation tools. The problem was that documentation lived outside the development workflow. It was a second-class citizen, maintained out of guilt rather than necessity.</p>
<h2>The New World: Context Files as Documentation</h2>
<p>Something different is happening now. Developers are creating files that live <em>inside</em> the repository, written specifically to give AI tools the context they need to work effectively with the codebase. And these files are becoming the most accurate, most up-to-date documentation in the entire project.</p>
<p>Here's what this looks like in practice across the major AI tools:</p>
<h3>CLAUDE.md — Claude Code's Project Memory</h3>
<p>When you use Claude Code, it automatically reads a <code>CLAUDE.md</code> file from your project root at the start of every session. This file becomes the AI's persistent understanding of your project — the architecture, the conventions, the commands, the things it should never do.</p>
<p>A typical <code>CLAUDE.md</code> might include:</p>
<pre><code class="language-markdown"># Project Context

This is a Node.js microservices platform using TypeScript.
We use pnpm workspaces for monorepo management.

## Key Commands
- `pnpm test` — runs all tests
- `pnpm build` — builds all packages  
- `pnpm lint` — runs ESLint + Prettier check

## Architecture
- /packages/api — Express REST API
- /packages/worker — Bull queue processors  
- /packages/shared — Shared types and utilities

## Conventions
- Use 2-space indentation
- All API endpoints must have integration tests
- Never import directly from /shared/internal
- Commit messages follow conventional commits
</code></pre>
<p>This isn't documentation in the traditional sense. It's a briefing document. It's what you'd tell a competent new team member on their first day — except the new team member has perfect recall and will follow every instruction exactly.</p>
<p>The Claude Code ecosystem has taken this further with hierarchical context: a global <code>~/.claude/CLAUDE.md</code> for personal preferences, project-level files for repo-specific context, and <code>.claude/rules/</code> directories for path-scoped instructions that only activate when working on specific parts of the codebase. There are also <code>SKILL.md</code> files in <code>.claude/skills/</code> — reusable workflows that Claude loads on demand for specialised tasks like generating documents, running migrations, or scaffolding components.</p>
<h3>.github/copilot-instructions.md — GitHub Copilot's Rulebook</h3>
<p>GitHub Copilot takes a similar approach. You create a <code>.github/copilot-instructions.md</code> file, and Copilot automatically includes it as context for every chat and agent request within that repository.</p>
<p>But GitHub has gone further with path-specific instructions. You can create multiple <code>.instructions.md</code> files under <code>.github/instructions/</code>, each scoped to specific files or directories using YAML frontmatter:</p>
<pre><code class="language-markdown">---
applyTo: "src/api/**/*.ts"
---

All API handlers must:
- Validate input using Zod schemas
- Return standardized error responses
- Include request tracing headers
- Log to structured JSON format
</code></pre>
<p>This means your backend code gets different AI guidance than your frontend code. Your test files get different instructions than your production code. The AI adapts its behaviour based on <em>where</em> it's working in your project.</p>
<p>But instructions are just the beginning. GitHub has built an entire composable system of context primitives inside your <code>.github/</code> directory:</p>
<p><strong>Skills</strong> (<code>.github/skills/&lt;skill-name&gt;/SKILL.md</code>) are self-contained folders of instructions, scripts, and resources that Copilot loads automatically when relevant to your task. Think of them as reusable playbooks — an incident triage workflow, a CI/CD troubleshooting guide, or a testing pattern specific to your project. When you ask Copilot something that matches a skill's description, it loads the full instructions and follows them, including any bundled scripts or templates.</p>
<p><strong>Reusable Prompts</strong> (<code>.github/prompts/&lt;name&gt;.prompt.md</code>) are pre-written workflows you invoke with a slash command. Type <code>/deploy-checklist</code> in chat and Copilot executes a multi-step deployment verification process your team defined once and everyone reuses. This is the "getting started guide as a prompt" idea made real.</p>
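<p>As a concrete sketch, a hypothetical <code>.github/prompts/deploy-checklist.prompt.md</code> might look like the following; treat the <code>mode</code> and <code>description</code> frontmatter keys as assumptions to verify against the current Copilot documentation:</p>
<pre><code class="language-markdown">---
mode: agent
description: Verify a service is ready to deploy
---

# Deploy Checklist

1. Run the full test suite and report any failures.
2. Check that the changelog has an entry for the current version.
3. Verify that no secrets or debug flags were added in this branch.
4. Summarise the diff against main and flag any schema migrations.
</code></pre>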
<p><strong>Custom Agents</strong> (<code>.github/agents/&lt;name&gt;.agent.md</code>) let you create specialised personas — a Security Reviewer that can only read code and run linters but never edit files, or a Planner agent that outputs an implementation plan before handing off to an Implementation agent. Each agent gets its own tool set and constraints.</p>
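<p>A minimal sketch of the read-only security reviewer described above might look like this; the exact frontmatter keys (here <code>name</code>, <code>description</code>, and <code>tools</code>) are illustrative and should be checked against the current custom agent schema:</p>
<pre><code class="language-markdown">---
name: security-reviewer
description: Reviews code for security issues without making changes
tools: ["read", "search"]
---

You are a security reviewer. Inspect the code for injection risks,
secrets committed to source, and missing input validation. Report
findings with file and line references. Never edit files.
</code></pre>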
<p><strong>Chat Modes</strong> (<code>.github/chatmodes/&lt;name&gt;.chatmode.md</code>) are predefined configurations that tailor AI behaviour for specific tasks — a DBA persona with access to database tools, or a code reviewer focused only on performance patterns.</p>
<p><strong>Hooks</strong> sit underneath everything, executing custom shell commands at key points during agent execution — enforcing linting, running security scans, or logging actions regardless of which other primitive triggered the work.</p>
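<p>In Claude Code, for instance, hooks live in <code>.claude/settings.json</code>. The sketch below runs the project's lint command after every file edit; treat the event and matcher names as a best-effort reading of the hooks documentation and verify them against your version:</p>
<pre><code class="language-json">{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "pnpm lint" }
        ]
      }
    ]
  }
}
</code></pre>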
<p>The key insight from GitHub's design is that these aren't competing features — they're a composable system. A custom agent can reference instruction files. A prompt can run inside a specific agent. A skill can bundle scripts that any agent invokes. Hooks enforce hard gates at execution time. Together, they form a complete "AI developer experience" that lives entirely inside version-controlled files in your repository.</p>
<p>GitHub also supports <code>AGENTS.md</code> files — a cross-tool standard that works with Copilot, Claude Code, and other AI agents, giving you a single instruction file that multiple tools can understand.</p>
<h3>.cursorrules and .cursor/rules/ — Cursor's Configuration Layer</h3>
<p>Cursor uses <code>.cursorrules</code> files (and the newer <code>.cursor/rules/</code> directory) to customise AI behaviour per project. These files tell the AI what framework you're using, what patterns to follow, and critically, what patterns to avoid.</p>
<p>A typical <code>.cursorrules</code> might look like:</p>
<pre><code class="language-markdown">You are working on a Next.js 14 app with App Router, TypeScript, and Tailwind CSS.

## Tech Stack
- Framework: Next.js 14 (App Router, Server Components by default)
- Language: TypeScript (strict mode)
- Styling: Tailwind CSS, no CSS modules
- State: Zustand for client state, TanStack Query for server state
- Testing: Vitest + React Testing Library

## Patterns to Follow
- Use Server Components unless client interactivity is needed
- Co-locate components with their tests: Button.tsx + Button.test.tsx
- All API calls go through /lib/api/ — never fetch directly in components
- Use Zod schemas for all form validation and API responses
- Error boundaries at route segment level, not per component

## Patterns to Avoid
- Never use "use client" at page level — push it to leaf components
- Never use barrel exports (index.ts re-exports) — they break tree shaking
- Never use any — use unknown + type narrowing instead
- Do not use relative imports beyond one level — use @/ path alias
</code></pre>
<p>The Cursor community has built an entire ecosystem around this. Sites like cursor.directory offer pre-built rule sets for every framework and language combination — drop in a React + TypeScript ruleset and Cursor immediately knows your conventions.</p>
<p>The newer rules system supports different activation modes: always-on rules, rules that auto-attach based on file patterns, and rules the agent can request when it determines they're relevant. It's documentation that activates contextually.</p>
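<p>For example, a scoped rule is an <code>.mdc</code> file whose frontmatter controls when it activates. In this sketch the <code>globs</code> and <code>alwaysApply</code> keys follow the commonly documented schema, which may evolve between versions:</p>
<pre><code class="language-markdown">---
description: Conventions for API route handlers
globs: ["src/app/api/**/*.ts"]
alwaysApply: false
---

- Validate request bodies with Zod before any business logic
- Return standardized JSON error responses, never raw strings
- Add an integration test for every new route
</code></pre>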
<h2>The New Documentation Stack: A Complete Comparison</h2>
<p>To put it all together, here's how the old documentation model maps to the new context file model across every major AI tool. This is the shift in software delivery documentation happening right now:</p>
<table>
<thead>
<tr>
<th>Purpose</th>
<th>Old World</th>
<th>Claude Code</th>
<th>GitHub Copilot</th>
<th>Cursor</th>
</tr>
</thead>
<tbody><tr>
<td><strong>Project context &amp; conventions</strong></td>
<td>README.md, Wiki, Confluence</td>
<td><code>CLAUDE.md</code></td>
<td><code>.github/copilot-instructions.md</code></td>
<td><code>.cursorrules</code></td>
</tr>
<tr>
<td><strong>Path-specific rules</strong></td>
<td>Inline comments, tribal knowledge</td>
<td><code>.claude/rules/*.md</code></td>
<td><code>.github/instructions/*.instructions.md</code></td>
<td><code>.cursor/rules/*.mdc</code></td>
</tr>
<tr>
<td><strong>Reusable workflows</strong></td>
<td>Runbooks in wiki, bookmarked Slack threads</td>
<td><code>.claude/skills/*/SKILL.md</code></td>
<td><code>.github/skills/*/SKILL.md</code></td>
<td>—</td>
</tr>
<tr>
<td><strong>Getting started / task prompts</strong></td>
<td>Getting Started guide, onboarding docs</td>
<td>Prompt in conversation</td>
<td><code>.github/prompts/*.prompt.md</code></td>
<td>—</td>
</tr>
<tr>
<td><strong>Specialised personas</strong></td>
<td>"Ask Dave, he knows the auth system"</td>
<td>—</td>
<td><code>.github/agents/*.agent.md</code></td>
<td>—</td>
</tr>
<tr>
<td><strong>Interaction modes</strong></td>
<td>—</td>
<td>—</td>
<td><code>.github/chatmodes/*.chatmode.md</code></td>
<td>—</td>
</tr>
<tr>
<td><strong>Automation guardrails</strong></td>
<td>CI/CD scripts, pre-commit hooks</td>
<td>Hooks</td>
<td>Hooks</td>
<td>—</td>
</tr>
<tr>
<td><strong>Cross-tool compatibility</strong></td>
<td>—</td>
<td><code>AGENTS.md</code></td>
<td><code>AGENTS.md</code></td>
<td><code>AGENTS.md</code></td>
</tr>
<tr>
<td><strong>Personal preferences</strong></td>
<td>"My setup notes" in Notes app</td>
<td><code>~/.claude/CLAUDE.md</code></td>
<td><code>~/.copilot/copilot-instructions.md</code></td>
<td>Global Rules in settings</td>
</tr>
</tbody></table>
<p>A few things stand out from this table:</p>
<p><strong>The old "documentation" was scattered across a dozen tools</strong> — wikis, READMEs, Confluence, Slack bookmarks, tribal knowledge in people's heads. The new model consolidates everything into version-controlled files that live alongside the code.</p>
<p><strong>GitHub Copilot has the broadest surface area</strong> with skills, prompts, agents, chat modes, and hooks forming a full composable system. Claude Code focuses on depth with hierarchical context and skills. Cursor keeps it simple with rules files and a strong community ecosystem.</p>
<p><strong>The "Ask Dave" problem is being solved by files.</strong> That senior engineer who knows how the auth system works? Their knowledge now lives in a custom agent definition or a skill file that the entire team — and their AI tools — can access.</p>
<p><strong>Every tool supports</strong> <code>AGENTS.md</code> as a cross-tool standard. This means you can write one set of core instructions and have them work across Claude Code, Copilot, and other AI agents. Invest here first.</p>
<h2>Why This Matters for Software Delivery</h2>
<p>This isn't just a tooling trend. It represents a fundamental shift in how software should be packaged and delivered.</p>
<h3>Context Files Are Always Up-to-Date</h3>
<p>Here's the key insight: developers actually maintain context files because <em>they use them every day</em>. When your <code>CLAUDE.md</code> has a wrong build command, you notice immediately because the AI fails. When your Confluence page has a wrong build command, you don't notice until a new joiner wastes half a day.</p>
<p>Context files create a feedback loop that traditional documentation never had. The documentation is consumed constantly, by an agent that will faithfully execute whatever it says, which means inaccuracies surface and get fixed fast.</p>
<h3>Prompts Are the New Getting Started Guide</h3>
<p>Think about what a "Getting Started" guide really is. It's a series of instructions: clone this, install that, run this command, check this output. It's procedural. It's sequential. It's... a prompt.</p>
<p>The most effective onboarding experience for a new developer using AI tools isn't a 15-page guide — it's a well-crafted prompt that says: "Set up a local development environment for this project, run the tests, and verify everything works." Combined with good context files, the AI can execute this end-to-end, asking for human input only when it genuinely needs it.</p>
<p>Forward-thinking teams are starting to include ready-made AI context across their repos. Here's what a fully AI-native repository structure looks like in 2026:</p>
<pre><code class="language-plaintext">├── CLAUDE.md                          # Claude Code project context
├── AGENTS.md                          # Cross-tool agent instructions
├── .github/
│   ├── copilot-instructions.md        # Copilot global instructions
│   ├── instructions/
│   │   └── api.instructions.md        # Path-specific rules
│   ├── prompts/
│   │   ├── setup-local-env.prompt.md  # Onboarding prompt
│   │   └── add-api-endpoint.prompt.md # Feature workflow prompt
│   ├── agents/
│   │   ├── security-reviewer.agent.md # Read-only security persona
│   │   └── planner.agent.md           # Architecture planning agent
│   ├── chatmodes/
│   │   └── dba.chatmode.md            # Database expert mode
│   └── skills/
│       └── incident-triage/
│           └── SKILL.md               # Reusable triage playbook
├── .claude/
│   ├── rules/                         # Path-scoped Claude rules
│   └── skills/                        # Claude Code skills
├── .cursorrules                       # Cursor project rules
└── .cursor/rules/                     # Scoped Cursor rules
</code></pre>
<p>Each file is a piece of the AI developer experience — version-controlled, team-shared, and consumed every single day. This is the new getting started guide. This is what onboarding looks like in 2026.</p>
<h3>The SDK of the Future Ships with a CLAUDE.md</h3>
<p>If you maintain a library, an SDK, or any software that other developers consume, think about this: the developers using your tool are increasingly going to interact with it through AI agents. The quality of the experience they have is going to depend not just on your API design, but on how well their AI agent understands your tool.</p>
<p>This means shipping your package with context files is no longer optional — it's part of the developer experience. A <code>CLAUDE.md</code> or <code>copilot-instructions.md</code> that explains your library's conventions, common pitfalls, and best practices will directly improve the code that AI agents generate when using your tool.</p>
<h2>What This Means for DevOps and Platform Teams</h2>
<p>If you're running a platform team or internal developer platform, this shift has concrete implications:</p>
<p><strong>CI/CD pipelines should validate context files.</strong> Just as you lint code and check for test coverage, you should verify that context files exist and are consistent with the actual project structure. A <code>CLAUDE.md</code> that references a build command that doesn't exist is a bug.</p>
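<p>A minimal sketch of such a check (the file list follows this post's examples; adapt it to your own stack) could run as an early CI step:</p>
<pre><code class="language-python">from pathlib import Path

# Context files this repo promises to ship (adjust the list to your own set).
REQUIRED = [
    "CLAUDE.md",
    "AGENTS.md",
    ".github/copilot-instructions.md",
]

def missing_context_files(repo_root):
    """Return the required context files that are absent from repo_root."""
    root = Path(repo_root)
    return [f for f in REQUIRED if not (root / f).is_file()]
</code></pre>
<p>A pipeline step would call <code>missing_context_files(".")</code> and fail the build when the result is non-empty; a deeper check could also compare the commands mentioned in each file against the scripts your build tool actually defines.</p>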
<p><strong>Templates and scaffolding should include context files.</strong> When your platform generates a new service from a template, that template should include pre-configured <code>CLAUDE.md</code>, <code>copilot-instructions.md</code>, and <code>.cursorrules</code> files appropriate for your organisation's stack and conventions.</p>
<p><strong>Internal documentation strategy should evolve.</strong> The best documentation strategy isn't "write docs in Confluence" or "write context files instead." It's recognising that these serve different audiences. High-level architecture, decision rationale, and business context still belong in human-readable docs. Build commands, conventions, file structure, and coding patterns belong in context files.</p>
<p><strong>Onboarding metrics will change.</strong> Instead of measuring "time to first commit," we should be measuring "time to first AI-assisted feature." The teams that have good context files will see dramatically faster onboarding for developers who use AI tools — which is rapidly becoming all of them.</p>
<h2>The Practical First Step</h2>
<p>If you take one thing from this post, make it this: go to your most important repository and create a <code>CLAUDE.md</code> file. Put in it the things you'd tell a new team member on day one:</p>
<ol>
<li><p>What the project does (one paragraph)</p>
</li>
<li><p>How to build and test it (the actual commands)</p>
</li>
<li><p>The project structure (what lives where)</p>
</li>
<li><p>The conventions that matter (naming, patterns, things to avoid)</p>
</li>
<li><p>Common pitfalls (the things that always trip people up)</p>
</li>
</ol>
<p>Then create a <code>.github/copilot-instructions.md</code> with the same content. And a <code>.cursorrules</code> file if your team uses Cursor.</p>
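<p>If you want a single source of truth while you experiment, one hedged approach is a small sync script that mirrors <code>CLAUDE.md</code> into the other tools' locations; this is a simplification, since in practice each tool may deserve tailored content:</p>
<pre><code class="language-python">import shutil
from pathlib import Path

# Paths the other tools read, mirrored from one source context file.
TARGETS = ["AGENTS.md", ".github/copilot-instructions.md", ".cursorrules"]

def sync_context_files(repo_root, source="CLAUDE.md"):
    """Copy the source context file to each target path; return the paths written."""
    root = Path(repo_root)
    written = []
    for rel in TARGETS:
        target = root / rel
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copyfile(root / source, target)
        written.append(rel)
    return written
</code></pre>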
<p>You'll notice something interesting: within a week, these files will be more accurate than any documentation you've maintained in the past year. Because you'll be using them every single day, and when they're wrong, you'll feel it immediately.</p>
<h2>The Future Is Context-Engineered</h2>
<p>We're moving from a world where software is documented to a world where software is <em>context-engineered</em>. The distinction matters. Documentation is something you write after the fact for someone who might read it someday. Context engineering is something you build alongside your code, for an agent that will consume it in every session, every day.</p>
<p>The teams that understand this shift will ship faster, onboard developers faster, and build more consistent codebases. The teams that don't will wonder why their AI tools seem so much less effective than everyone else's.</p>
<p>The new documentation is a context file. The new getting started guide is a prompt. The new developer experience is AI-native.</p>
<p>Ship context, not just docs.</p>
<hr />
<p><em>What context files is your team using? I'd love to hear what's working for you. Find me on</em> <a href="https://hashnode.com/@muditcse"><em>Hashnode</em></a> <em>or drop a comment below.</em></p>
]]></content:encoded></item><item><title><![CDATA[Cloud Cost: Shifting Left, Right & Center]]></title><description><![CDATA[Cloud is not cheap and with the exponential adoption of cloud in the last half-decade, that myth of "cloud is cheap" has already been broken.
In the last 5 years, especially around the pandemic time, we have seen that big enterprises are running a ra...]]></description><link>https://ministryofdevops.com/cloud-cost-shifting-left-right-center</link><guid isPermaLink="true">https://ministryofdevops.com/cloud-cost-shifting-left-right-center</guid><category><![CDATA[Cloud]]></category><category><![CDATA[Cloud Computing]]></category><category><![CDATA[cost-optimisation]]></category><category><![CDATA[cloudcost]]></category><dc:creator><![CDATA[Mudit Kumar]]></dc:creator><pubDate>Tue, 31 Oct 2023 07:20:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1698619328862/b1f86381-79e3-4814-9574-a189c0892bac.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Cloud is not cheap and with the exponential adoption of cloud in the last half-decade, that myth of "cloud is cheap" has already been broken.</p>
<p>In the last 5 years, especially around the pandemic, we have seen big enterprises running a race for cloud adoption, and what was promised to look like a sprint has become a never-ending marathon.</p>
<p>The top two concerns in the cloud adoption journey are the lack of talent and skyrocketing cloud cost. Though the two are related in one way or another, in this blog we will focus only on cloud cost.</p>
<p>The next obvious question is how we optimize cloud cost: when to do it, who does it, and how to sustain it for the long run. Do we integrate every possible check into CI right from the initial stage of development, say in the dev environment, as a shift-left approach? Or do we act only after going live with limited features, taking cautious steps as our environment matures and we understand our workloads and requirements better (centre/towards-right)?</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1698734291941/19b1ffba-a6b9-4c09-ab99-962cc44a45c3.png" alt class="image--center mx-auto" /></p>
<h3>Role of CSPs</h3>
<p>Cloud Service Providers have not done enough to help consumers control costs. By "not enough" I mean they have not socialized cost awareness or educated consumers well enough to take proactive measures; despite some processes being in place, cost control remains largely reactive.</p>
<p>This has opened the door for third-party vendors and start-ups claiming to be experts in cloud cost optimization, and clearly it is the CSPs that have left them this headroom and space.</p>
<p>With IaC, why don't CSPs provide out-of-the-box, built-in solutions and APIs that not only prevent cost from going over budget but also report on the provisioned infra within that pipeline? Even where some APIs have been made available, why do customers need to write code against them just to manage costs?</p>
<p>Why hasn't cost been treated as a quality gate, or even as a shift-left gate, in a straightforward manner without any ifs and buts?</p>
<h3>FinOps</h3>
<p>Similarly, I see recommendations to set up big FinOps teams that end up tracking cost consumption rather than driving cost reduction. And, again, it is reactive. Are FinOps teams able to proactively suggest, drive, and industrialize effective cost-saving measures among engineering teams? And isn't that a shift-right approach?</p>
<h3>Human Nature</h3>
<p>Do we ever forget to switch off electrical equipment in our house when it's not in use? For most of us, the answer is no, because we are paying from our own pocket. If we bring the same mindset to our cloud resource usage, utilization, and cost optimization, we will not only save on cost but also contribute towards a greener environment and reduced carbon emissions.</p>
<p>Thanks for your time and hope it helps!</p>
<p>As usual, do provide your feedback.</p>
<p><a target="_blank" href="https://www.linkedin.com/comm/mynetwork/discovery-see-all?usecase=PEOPLE_FOLLOWS&amp;followMember=muditcse">Follow me on LinkedIn</a></p>
]]></content:encoded></item><item><title><![CDATA[Being an Impactful DevOps]]></title><description><![CDATA[Being an Impactful, valuable, and effective DevOps is not limited to tools and technology. Knowing Jenkins, Python, Cloud Certification, CKA, Linux, Security, and Networking is all good and required to be a successful DevOps/SRE/PE but the journey st...]]></description><link>https://ministryofdevops.com/being-an-impactful-devops</link><guid isPermaLink="true">https://ministryofdevops.com/being-an-impactful-devops</guid><category><![CDATA[Devops]]></category><category><![CDATA[DevOps Journey]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[ci-cd]]></category><category><![CDATA[networking]]></category><dc:creator><![CDATA[Mudit Kumar]]></dc:creator><pubDate>Sat, 26 Aug 2023 22:18:24 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1693087987763/c416940c-8619-4c43-b5e6-aa67075f138c.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Being an Impactful, valuable, and effective DevOps is not limited to tools and technology. Knowing Jenkins, Python, Cloud Certification, CKA, Linux, Security, and Networking is all good and required to be a successful DevOps/SRE/PE but the journey starts from here. We need a plan for the long run.</p>
<p>In this article, I have shared a few common habits and ways of working that, in my experience, help you become an impactful DevOps engineer.</p>
<h3 id="heading-learn-the-concept-not-the-tool">Learn the concept, not the tool</h3>
<p>The number of tools will always keep growing and it's impossible to learn everything. Instead, if you focus on the basic concepts, learning any new tool is not tough. If you know about the #CI pipeline or say #DevSecOps Pipeline, you just need to understand which stages you need to set up and what each one of them does, irrespective of whether you use #Azure #DevOps, #GitHub Actions, or Jenkins.</p>
<p>Similarly, don't chase certifications on multiple clouds. All the clouds have more similarities than differences: mainly how the portal looks, the naming conventions, and a few differing services. Underneath, they all run on hardware and use modern hypervisors.</p>
<p>Another example is #Kubernetes. Learn some distributed system concepts, networking concepts, and Unix concepts before jumping on to getting CKA certified or even starting your K8s journey. A lot of people provide just the required content and training courses which will help you to clear the exam and get CKA certified but that will not help in the long run.</p>
<h3 id="heading-always-ask-yourself-why-you-are-doing-this-task-what-are-the-associated-benefits-and-business-value">Always ask yourself, why you are doing this task. What are the associated benefits and Business value?</h3>
<p>Before jumping into the task, you should be clear about its associated benefits and business value. With that clarity, you will be able to execute the assigned task with better alignment, without going back to your POs/lead/architect again and again to ask for clarification.</p>
<h3 id="heading-see-the-bigger-picture-think-forward-and-be-proactive">See the Bigger Picture, Think Forward, and Be Proactive</h3>
<p>As a #DevOps, it's very important to see the bigger picture and not be limited to the assigned task or issue. Always try to think from an end-to-end architecture point of view and connect the existing dots with your solution.</p>
<p>Always think about the future impact your assigned task will have once it is rolled out. This matters even more if you are part of a central team that frequently rolls out solutions to application teams. Putting yourself in your consumers' shoes and understanding their pain is a must from a DevOps perspective.</p>
<p>Being proactive is a very important skill for an impactful DevOps engineer. Thinking about scenarios, tasks, dependencies, and challenges as early as possible will make you stand out from the crowd.</p>
<h3 id="heading-challenge-the-people-and-processes">Challenge the People and Processes.</h3>
<p>"Why?" is always a challenging question to answer, but if you ask it with the right logic and reasons, it's always appreciated. Following the stated processes and procedures is easy; carefully studying them, finding the loopholes, and then proposing a solution within the same process that brings business value and productivity improvement is the real thing.</p>
<p>Also, it's very easy to blame and make fun of the people who first laid down those processes in the past, but remember, they did it with some forward thinking, and that's why it still works today.</p>
<h3 id="heading-propose-scalable-solutions">Propose scalable solutions</h3>
<p>For any issue, whether small or big, always try to propose #scalable solutions. Sometimes you might need a tactical fix, but that tactical fix should still be in line with a strategic, scalable solution and shouldn't become a duplicated effort altogether.</p>
<h3 id="heading-home-lab">Home Lab</h3>
<p>Theory exists in the mind, not in systems. You should have a home lab as a playground for small PoCs: drill down into the core concepts of particular tools and implement use cases that aren't normally covered in Getting Started guides and copy-paste tutorials. Start doing things THE HARD WAY.</p>
<p>Get your hands dirty by doing small PoCs locally before going into architectural/deep-dive/technical discussions. Googled, theoretical knowledge may backfire if the group has smarter people.</p>
<p>Get yourself at least a Linux and a Windows desktop to stand out from the crowd, and you will start seeing the impact immediately. If possible, try to assemble the PC yourself — believe me, you will love it. Also, keep a bootable disk or USB drive ready, and don't be afraid to break the OS.</p>
<h3 id="heading-be-flexible">Be flexible</h3>
<p>Don't be rigid in your thought processes, knowledge, and choice of tools and technology. Remember that you are trying to solve a business problem through technology, so never let ego get in the way of the right technical choice.</p>
<h3 id="heading-be-up-to-date-on-tech-trends">Be up-to-date on tech trends</h3>
<p>It's very important to stay up to date on the latest tech trends and keep upskilling. However, it is not easy, as there is a flood of information around that can create fatigue. We have to be very careful about the choices we make and plan where we see ourselves in the coming years, weighing our current skills against changing tech trends.</p>
<p>In my next blog, I will share a few of the things I do to keep myself updated with the latest trends in the IT industry, as per my interests.</p>
<p>I will be eagerly waiting for your feedback.</p>
<p>Happy Reading!</p>
<p><a target="_blank" href="https://www.linkedin.com/comm/mynetwork/discovery-see-all?usecase=PEOPLE_FOLLOWS&amp;followMember=muditcse">Follow on LinkedIn</a></p>
]]></content:encoded></item><item><title><![CDATA[Cybersecurity 101]]></title><description><![CDATA[In this blog, I will briefly discuss very common terms used in day-to-day life of Security Engineers. I hope this will be helpful for people who want to start their journey in cybersecurity or just want to understand the basics of frequently used t...]]></description><link>https://ministryofdevops.com/cybersecurity-101</link><guid isPermaLink="true">https://ministryofdevops.com/cybersecurity-101</guid><category><![CDATA[#cybersecurity]]></category><category><![CDATA[securityawareness]]></category><category><![CDATA[threat]]></category><category><![CDATA[CybersecurityAwareness]]></category><category><![CDATA[CyberSecurity101]]></category><dc:creator><![CDATA[Mudit Kumar]]></dc:creator><pubDate>Thu, 10 Aug 2023 19:50:23 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1691696911718/2f4e6d59-e349-4487-808a-be9e09e32907.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In this blog, I will briefly discuss very common terms used in the day-to-day life of Security Engineers. I hope this will be helpful for people who want to start their journey in cybersecurity or just want to understand the basics of frequently used terms.</p>
<table><tbody><tr><td><p>Cybersecurity</p></td><td><p>Cybersecurity relates to processes employed to safeguard and secure assets used to carry information of an organization from being stolen or attacked. It requires extensive knowledge of possible threats such as viruses or other malicious objects. Identity management, risk management, and incident management form the crux of the cybersecurity strategies of an organization.</p></td></tr><tr><td><p>Red Team</p></td><td><p>Red teams are offensive. They attack their own systems to find the vulnerabilities of a company’s network and infrastructure within a controlled environment (i.e. penetration testing, threat emulation, threat hunting). This is critical for security as it shows the real strength of the infrastructure, by stress-testing the defense mechanisms the Blue team created.</p></td></tr><tr><td><p>Blue Team</p></td><td><p>Blue teams are defensive. They employ defensive strategies to prevent external parties from getting access to critical infrastructure (i.e. antiviruses, firewalls, security policies, access procedures, compliance rules). Most security departments will have at least a Blue team.</p></td></tr><tr><td><p>Purple Team</p></td><td><p>Purple teams act as an intermediary that allows Red and Blue teams to communicate. Ideally, a Blue team can install defenses, and a Red team will attack them and report if it found any exploits or weaknesses. The Purple team should then review this report with the Blue team and help them define a strategy to fix the issues. This creates a feedback loop between the two teams, so they can collaborate more effectively to patch vulnerabilities and establish a stronger more secure environment.</p></td></tr><tr><td><p>Clickjacking</p></td><td><p>Clickjacking involves tricking someone into clicking on one object on a web page while they think they are clicking on another. 
The attacker loads a transparent page over the legitimate content on the web page so that the victim thinks they are clicking on a legitimate item when they are really clicking on something on the attacker’s invisible page. This way, the attacker can hijack the victim’s click for their own purposes. Clickjacking could be used to install malware, gain access to one of the victim’s online accounts, or enable the victim’s webcam.</p></td></tr><tr><td><p>Attack Vectors</p></td><td><p>An Attack Vector is the collection of all vulnerable points by which an attacker can gain entry into the target system. Attack vectors include vulnerable points in technology as well as human behavior, skillfully exploited by attackers to gain access to networks. The growth of IoT devices and work-from-home arrangements has greatly increased the number of attack vectors, making networks increasingly difficult to defend.</p></td></tr><tr><td><p>Attack Surface</p></td><td><p>The attack surface is <strong>the number of all possible points, or attack vectors, where an unauthorized user can access a system and extract data</strong>. The smaller the attack surface, the easier it is to protect.</p></td></tr><tr><td><p>SIEM</p></td><td><p>Security Information and Event Management (SIEM) is a formal process by which the security of an organization is monitored and evaluated on a constant basis. 
SIEM helps to automatically identify systems that are out of compliance with the security policy as well as to notify the IRT (Incident Response Team) of any security-violating events.</p></td></tr><tr><td><p>SOAR</p></td><td><p>SOAR (Security Orchestration, Automation and Response) is a solution stack of compatible software programs that organizations use to collect data about security threats from across the network and respond to low-level security events without human assistance.</p></td></tr><tr><td><p>Threat Actor</p></td><td><p>A threat actor, also known as a malicious actor, is <strong>any person or organization that intentionally causes harm in the digital sphere</strong>. They exploit weaknesses in computers, networks and systems to carry out disruptive attacks on individuals or organizations.</p></td></tr><tr><td><p>Advanced Persistent Threat (APT)</p></td><td><p>In an APT attack, a threat actor uses the most sophisticated tactics and technologies to penetrate a high-profile network. APTs aim to stay ‘under the radar’ and explore the network while remaining undetected for weeks, months, and even years. APTs are most often used by nation-state threat actors wishing to cause severe disruption and damage to the economic and political stability of a country. They can be considered the cyber equivalent of espionage ‘sleeper cells’.</p></td></tr><tr><td><p>Advanced Threat Protection (ATP)</p></td><td><p>Advanced Threat Protection (ATP) are security solutions that defend against sophisticated malware or hacking attacks targeting sensitive data. Advanced Threat Protection includes both software and managed security services.</p></td></tr><tr><td><p>Brute Force Attack</p></td><td><p>This is a method for guessing a password (or the key used to encrypt a message) that involves systematically trying a high volume of possible combinations of characters until the correct one is found. 
One way to reduce the susceptibility to a Brute Force Attack is to limit the number of permitted attempts to enter a password – for example, by allowing only three failed attempts and then permitting further attempts only after 15 minutes.</p></td></tr><tr><td><p>Detection and Response</p></td><td><p>Network Detection and Response is a security solution category used by organizations to detect malicious network activity, perform a forensic investigation to determine the root cause, and then respond and mitigate the threat.</p></td></tr><tr><td><p>Endpoint Detection and Response (EDR)</p></td><td><p>Endpoint Detection and Response (EDR) are tools for protecting computer endpoints from potential threats. EDR platforms comprise software and networking tools for detecting suspicious endpoint activities, usually via continuous network monitoring.</p></td></tr><tr><td><p>Honeypot</p></td><td><p>Honeypots are computer security programs that simulate network resources that hackers are likely to look for to lure them in and trap them. An attacker may assume that you’re running weak services that can be used to break into the machine. A honeypot provides you with advanced warning of a more concerted attack. Two or more honeypots on a network form a honeynet.</p></td></tr><tr><td><p>MITRE ATT&amp;CK™ Framework</p></td><td><p>The MITRE ATT&amp;CK™ framework is a comprehensive matrix of tactics and techniques used by threat hunters, red teamers, and defenders to better classify attacks and assess an organization’s risk. 
The aim of the framework is to improve post-compromise detection of adversaries in enterprises by illustrating the actions an attacker may have taken.</p></td></tr><tr><td><p>CIS(Center for Internet Security)</p></td><td><p>The Center for Internet Security (CIS) publishes the CIS Critical Security Controls (CSC) to help organizations better defend against known attacks by distilling key security concepts into actionable controls to achieve greater overall cybersecurity defense.</p></td></tr><tr><td><p>CIS Controls</p></td><td><p>The CIS Critical Security Controls (CIS Controls) are a prescriptive, prioritized, and simplified set of best practices that you can use to strengthen your cybersecurity posture.</p></td></tr><tr><td><p>CIS Benchmarks</p></td><td><p>CIS Benchmarks from the Center for Internet Security (CIS) are a set of globally recognized and consensus-driven best practices to help security practitioners implement and manage their cybersecurity defenses.</p></td></tr><tr><td><p>NIST Framework</p></td><td><p>The NIST Cybersecurity Framework (NIST CSF) <strong>consists of standards, guidelines, and best practices that help organizations improve their management of cybersecurity risk</strong>. The NIST CSF is designed to be flexible enough to integrate with the existing security processes within any organization, in any industry.</p></td></tr><tr><td><p>Pen Testing</p></td><td><p>Pen (Penetration) Testing is the practice of intentionally challenging the security of a computer system, network, or web application to discover vulnerabilities that an attacker or hacker could exploit.</p></td></tr><tr><td><p>Rootkit</p></td><td><p>A Rootkit is a collection of software tools or a program that gives a hacker remote access to, and control over, a computer or network. Rootkits themselves do not cause direct harm – and there have been legitimate uses for this type of software, such as to provide remote end user support. 
However, most rootkits open a backdoor on targeted computers for the introduction of malware, viruses, and ransomware, or use the system for further network security attacks. A rootkit is typically installed through a stolen password, or by exploiting system vulnerabilities without the victim’s knowledge. In most cases, rootkits are used in conjunction with other malware to prevent detection by endpoint antivirus software.</p></td></tr><tr><td><p>Social Engineering</p></td><td><p>Social Engineering is an increasingly popular method of gaining access to unauthorized resources by exploiting human psychology and manipulating users – rather than by breaking in or using technical hacking techniques. Instead of trying to find a software vulnerability in a corporate system, a social engineer might send an email to an employee pretending to be from the IT department, trying to trick them into revealing sensitive information. Social engineering is the foundation of spear phishing attacks.</p></td></tr><tr><td><p>Threat Assessment</p></td><td><p>Threat Assessment is a structured process used to identify and evaluate various risks or threats that an organization might be exposed to. Cyber threat assessment is a crucial part of any organization’s risk management strategy and data protection efforts.</p></td></tr><tr><td><p>Threat Hunting</p></td><td><p>Cyber Threat Hunting is an active cyber defense activity where cybersecurity professionals actively search networks to detect and mitigate advanced threats that evade existing security solutions.</p></td></tr><tr><td><p>Threat Intelligence</p></td><td><p>Threat Intelligence, or cyber threat intelligence, is intelligence proactively obtained and used to understand the threats that are targeting the organization.</p></td></tr><tr><td><p>Trojan</p></td><td><p>Trojans are malicious programs that perform actions that are not authorized by the user: they delete, block, modify or copy data, and they disrupt the performance of computers or computer networks. 
Unlike viruses and worms, Trojans are unable to make copies of themselves or self-replicate.</p></td></tr><tr><td><p>Threat Modeling</p></td><td><p>A threat model is a structured representation of all the information that affects the security of an application. In essence, it is a view of the application and its environment through the lens of security.</p></td></tr><tr><td><p>PASTA</p></td><td><p>PASTA is an acronym that stands for <strong>Process for Attack Simulation and Threat Analysis</strong>. It is a 7-step risk-based threat modeling framework.</p></td></tr><tr><td><p>DREAD</p></td><td><p>The DREAD model quantitatively assesses the severity of a cyber threat using a scaled rating system that assigns numerical values to risk categories.</p></td></tr><tr><td><p>STRIDE</p></td><td><p>The STRIDE threat model is a developer-focused model to identify and classify threats under 6 types of attacks – Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service (DoS), and Elevation of Privilege.</p></td></tr><tr><td><p>White Hat – Black Hat</p></td><td><p>White Hat – Black Hat are terms to describe the ‘good guys’ and ‘bad guys’ in the world of cybercrime. Black hats are hackers with criminal intentions. White hats are hackers who use their skills and talents for good and work to keep data safe from other hackers by finding system vulnerabilities that can be fixed.</p></td></tr><tr><td><p>WAF</p></td><td><p>A Web Application Firewall (WAF) is a specific form of application firewall that filters, monitors, and blocks HTTP traffic to and from a web service. 
By inspecting HTTP traffic, it can prevent attacks exploiting a web application’s known vulnerabilities, such as SQL injection, cross-site scripting (XSS), file inclusion, and improper system configuration.</p></td></tr><tr><td><p>Vulnerability</p></td><td><p>Vulnerabilities are weaknesses in software programs that can be exploited by hackers to compromise computers.</p></td></tr><tr><td><p>DevSecOps</p></td><td><p>DevSecOps—short for <em>development, security, </em>and <em>operations</em>—automates the integration of security at every phase of the software development lifecycle, from initial design through integration, testing, deployment, and software delivery.</p></td></tr></tbody></table>
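<p>The brute-force mitigation described in the table above ("allow only three failed attempts, then permit further attempts only after 15 minutes") maps directly to a small piece of code. The following is a hypothetical, in-memory sketch of that lockout policy of my own making, not a production implementation (a real system would persist state, throttle per source IP as well, and use constant-time comparisons):</p>

```python
# Hypothetical sketch of the lockout policy described above:
# three failed attempts lock the account for a 15-minute cooldown.
import time

MAX_ATTEMPTS = 3
LOCKOUT_SECONDS = 15 * 60  # 15 minutes, matching the example in the text


class LoginThrottle:
    def __init__(self):
        # username -> (failed_attempt_count, time_of_last_failure)
        self._failures = {}

    def is_locked(self, user, now=None):
        """Return True while the user is inside the lockout window."""
        now = time.time() if now is None else now
        count, last = self._failures.get(user, (0, 0.0))
        if count >= MAX_ATTEMPTS and (now - last) < LOCKOUT_SECONDS:
            return True
        if count >= MAX_ATTEMPTS:
            # Cooldown expired: reset the counter so the user may retry.
            self._failures.pop(user, None)
        return False

    def record_failure(self, user, now=None):
        """Count one failed login attempt."""
        now = time.time() if now is None else now
        count, _ = self._failures.get(user, (0, 0.0))
        self._failures[user] = (count + 1, now)

    def record_success(self, user):
        """A successful login clears any accumulated failures."""
        self._failures.pop(user, None)
```

<p>The design point is that the attacker's cost per guess jumps from milliseconds to minutes, which is what makes systematic enumeration of passwords impractical.</p>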
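<p>To make the honeypot idea from the table concrete, here is a hypothetical toy illustration of my own (not taken from any of the referenced sources): a TCP listener that advertises a fake SSH banner, logs who connects and what they send, then closes. Real honeypot frameworks such as Cowrie do far more, but the lure-and-log principle is the same:</p>

```python
# Hypothetical single-connection honeypot sketch: pretend to be an SSH
# service, record the peer address and its first bytes, then close.
import socket

FAKE_BANNER = b"SSH-2.0-OpenSSH_7.4\r\n"  # bait: advertise a "weak" service


def serve_once(host="127.0.0.1", port=0):
    """Accept one connection, log it, and return (peer_address, first_bytes)."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen(1)
        print(f"honeypot listening on {srv.getsockname()}")
        conn, addr = srv.accept()
        with conn:
            conn.sendall(FAKE_BANNER)   # look like a real sshd
            conn.settimeout(2.0)
            try:
                data = conn.recv(1024)  # capture the visitor's first move
            except socket.timeout:
                data = b""
        print(f"connection from {addr}, first bytes: {data!r}")
        return addr, data
```

<p>Any traffic such a listener receives is by definition suspicious, since no legitimate client has a reason to connect; running two or more of these on spare addresses is the minimal form of the honeynet mentioned above.</p>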

<p>References:</p>
<p><a target="_blank" href="https://www.allot.com/">https://www.allot.com/</a></p>
<p><a target="_blank" href="https://www.bitsight.com/">https://www.bitsight.com/</a></p>
<p><a target="_blank" href="https://www.ibm.com/">https://www.ibm.com/</a></p>
<p><a target="_blank" href="https://learn.microsoft.com/en-us/azure/security/develop/threat-modeling-tool-threats#stride-model">https://learn.microsoft.com/en-us/azure/security/develop/threat-modeling-tool-threats</a></p>
<p><a target="_blank" href="https://www.eit2.com/">https://www.eit2.com/</a></p>
]]></content:encoded></item><item><title><![CDATA[Must have local tools for DevOps]]></title><description><![CDATA[List of Must have tools to increase Collaboration & productivity
1)tmate

For instant live terminal sharing with colleagues for collaboration and debugging together

link:https://tmate.io/


2)asciinema

Record and share terminals

link:https://asc...]]></description><link>https://ministryofdevops.com/must-have-local-tools-for-devops</link><guid isPermaLink="true">https://ministryofdevops.com/must-have-local-tools-for-devops</guid><category><![CDATA[Devops]]></category><category><![CDATA[Developer]]></category><category><![CDATA[development]]></category><category><![CDATA[tools]]></category><category><![CDATA[Productivity]]></category><dc:creator><![CDATA[Mudit Kumar]]></dc:creator><pubDate>Sun, 03 Apr 2022 11:16:42 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1648983706025/7Ad2JuavY.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-list-of-must-have-tools-to-increase-collaboration-andamp-productivity">List of Must have tools to increase Collaboration &amp; productivity</h2>
<p>1) tmate</p>
<ul>
<li><p>For instant live terminal sharing with colleagues, for collaboration and debugging together</p>
</li>
<li><p>link: https://tmate.io/</p>
</li>
</ul>
<p>2) asciinema</p>
<ul>
<li><p>Record and share terminal sessions</p>
</li>
<li><p>link: https://asciinema.org/</p>
</li>
</ul>
<p>3) asciicast2gif</p>
<ul>
<li><p>Convert an asciinema recording to a GIF</p>
</li>
<li><p>link: https://github.com/asciinema/asciicast2gif</p>
</li>
</ul>
<p>4) GoFullPage (full page screen capture)</p>
<ul>
<li><p>Capture a screenshot of the entire current page without needing to scroll it</p>
</li>
<li><p>link: https://chrome.google.com/webstore/detail/gofullpage-full-page-scre/fdpohaocaechififmbbbbbknoalclacl?hl=en</p>
</li>
</ul>
<p>5) mkcert</p>
<ul>
<li><p>If you want to test applications that need a local CA (certificate authority), this is the right tool.</p>
</li>
<li><p>link: https://github.com/FiloSottile/mkcert</p>
</li>
</ul>
<p>6) LICEcap</p>
<ul>
<li><p>LICEcap can capture an area of your desktop and save it directly to .GIF (for viewing in web browsers, etc).</p>
</li>
<li><p>link: https://www.cockos.com/licecap/</p>
</li>
</ul>
<p><a target="_blank" href="https://www.linkedin.com/comm/mynetwork/discovery-see-all?usecase=PEOPLE_FOLLOWS&amp;followMember=muditcse">Follow on LinkedIn</a></p>
]]></content:encoded></item></channel></rss>