<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Damir Dobric - DEVELOPERS.DE]]></title><description><![CDATA[Software Development Blog with focus on .NET, Windows, Microsoft Azure powered by daenet]]></description><link>https://developers.de/</link><image><url>https://developers.de/favicon.png</url><title>Damir Dobric - DEVELOPERS.DE</title><link>https://developers.de/</link></image><generator>Ghost 1.21</generator><lastBuildDate>Fri, 03 Apr 2026 06:57:18 GMT</lastBuildDate><atom:link href="https://developers.de/author/ddobric/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Office Crash: When Microsoft Word cannot open files on OneDrive, SharePoint & Co.]]></title><description><![CDATA[<div class="kg-card-markdown"><h1 id="whenmicrosoftwordcrasheshowathirdpartyplugincanbringitdown">When Microsoft Word Crashes: How a Third-Party Plugin Can Bring It Down</h1>
<p>Microsoft Word, like any other Office application, is usually very stable. When it starts crashing repeatedly, many users assume the problem lies with Office itself, Windows updates, or corrupted documents. In practice, a very common cause is</p></div>]]></description><link>https://developers.de/2026/02/03/my-office-applications-cannot-open-files-on-onedrive-sharepoint/</link><guid isPermaLink="false">69808948e8c0b11b9c3d615a</guid><dc:creator><![CDATA[Damir Dobric]]></dc:creator><pubDate>Tue, 03 Feb 2026 12:28:18 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><h1 id="whenmicrosoftwordcrasheshowathirdpartyplugincanbringitdown">When Microsoft Word Crashes: How a Third-Party Plugin Can Bring It Down</h1>
<p>Microsoft Word, like any other Office application, is usually very stable. When it starts crashing repeatedly, many users assume the problem lies with Office itself, Windows updates, or corrupted documents. In practice, a very common cause is <strong>third-party Office plugins</strong> that integrate deeply into Word.<br>
You might blame Microsoft or Windows for this, but in most cases Microsoft is not at fault: you update Office, and some plugin is no longer compatible with the new version. So, what can you do?</p>
<p>One frequent example is a crash caused by the <em>Seclore FileSecure</em> Office plugin.</p>
<p>This article explains:</p>
<ul>
<li>How to locate this kind of issue</li>
<li>Why Word crashes even though it is not the real cause</li>
<li>How to safely disable the problematic plugin</li>
</ul>
<p>Note that the same fix applies to all other Office applications!</p>
<hr>
<h2 id="thecrashsymptoms">The Crash Symptoms</h2>
<p>Users typically report:</p>
<ul>
<li>The Office application cannot open a file at a remote location (OneDrive, SharePoint)</li>
<li>The Office application crashes when opening a document</li>
<li>The crashes repeat consistently</li>
</ul>
<p>In <strong>Windows Event Viewer</strong>, the error often looks like this:</p>
<pre><code class="language-text">Faulting application name: WINWORD.EXE
Faulting module name: Office2016x64Plugin.dll
Exception code: 0xc0000409
Faulting module path: ...\Seclore\FileSecure\Desktop Client\...
</code></pre>
<p>At first glance, it appears that <strong>Microsoft Word</strong> (or another Office application) is at fault — but that is misleading.</p>
<hr>
<h2 id="understandingtherootcause">Understanding the Root Cause</h2>
<p>The most important line in the crash report is:</p>
<pre><code class="language-text">Faulting module name: Office2016x64Plugin.dll
</code></pre>
<p>This DLL belongs to <strong>Seclore</strong>, an enterprise Information Rights Management (IRM) solution that integrates directly into Microsoft Office.</p>
<p>What is happening internally:</p>
<ul>
<li>Word loads the Seclore plugin during startup</li>
<li>The plugin performs low-level operations (file protection, encryption, policy enforcement)</li>
<li>Due to a bug, incompatibility, or outdated version, the plugin triggers a <strong>memory violation</strong></li>
<li>Windows terminates Word immediately to protect system integrity</li>
</ul>
<p>The exception code <code>0xc0000409</code> usually indicates:</p>
<ul>
<li>Stack buffer overrun</li>
<li>Memory corruption</li>
<li>Unsafe or incompatible plugin code</li>
</ul>
<p>In short: <strong>Word crashes because the plugin crashes inside Word’s process</strong>.</p>
<hr>
<h2 id="whythisproblemoftenappearssuddenly">Why This Problem Often Appears Suddenly</h2>
<p>This issue commonly starts after:</p>
<ul>
<li>A <strong>Microsoft Office update</strong></li>
<li>A <strong>Windows update</strong></li>
<li>An update mismatch between Office and the Seclore client</li>
<li>Security software not being updated in sync with Office</li>
</ul>
<p>Reinstalling Office alone usually does <strong>not</strong> fix the issue, because the faulty plugin remains installed.</p>
<hr>
<h2 id="howtodisablethesecloreplugininmicrosoftword">How to Disable the Seclore Plugin in Microsoft Word</h2>
<p>If your organization allows it, disabling the plugin is the fastest way to confirm and resolve the problem.</p>
<h3 id="method1disableviawordoptionsrecommended">Method 1: Disable via Word Options (Recommended)</h3>
<ol>
<li>Open <strong>Microsoft Word</strong></li>
<li>Go to <strong>File → Options</strong></li>
<li>Select <strong>Add-ins</strong></li>
<li>At the bottom, next to <strong>Manage</strong>, choose <strong>COM Add-ins</strong></li>
<li>Click <strong>Go</strong></li>
<li>Locate the <strong>Seclore</strong> add-in (or similar)</li>
<li><strong>Uncheck</strong> the plugin</li>
<li>Click <strong>OK</strong></li>
<li>Restart Word</li>
</ol>
<p>If Word opens normally afterward, the plugin was the cause.</p>
<hr>
<h3 id="method2startwordinsafemodediagnostic">Method 2: Start Word in Safe Mode (Diagnostic)</h3>
<p>This method does not disable the plugin permanently, but it helps confirm the root cause.</p>
<ol>
<li>Press <strong>Win + R</strong></li>
<li>Run:<pre><code class="language-text">winword /safe
</code></pre>
</li>
<li>If Word works correctly in Safe Mode, the issue is <strong>definitely an add-in</strong></li>
</ol>
<hr>
<h3 id="method3updateorreinstallseclorebestlongtermfix">Method 3: Update or Reinstall Seclore (Best Long-Term Fix)</h3>
<p>If the plugin is required by company policy:</p>
<ul>
<li>Update the <strong>Seclore Desktop Client</strong> to the latest version</li>
<li>If already updated, reinstall it</li>
<li>Verify compatibility with the current Office build</li>
</ul>
<p>In managed corporate environments, this step should be handled by IT support.</p>
<hr>
<h2 id="keytakeaway">Key Takeaway</h2>
<p>When an Office application crashes:</p>
<ul>
<li>The <em>faulting application</em> is not always the <em>root cause</em></li>
<li>Third-party Office plugins run inside Word’s process</li>
<li>A single buggy DLL can crash the entire application</li>
</ul>
<p>In this case, <strong>Word is the victim — not the problem</strong>.</p>
<p><img src="https://developersde.blob.core.windows.net/usercontent/2026/2/21127_carshin%20word%20plugin.png" alt="21127_carshin%20word%20plugin"></p>
</div>]]></content:encoded></item><item><title><![CDATA[Migrating All Azure Resources Between Subscriptions with Azure CLI & PowerShell]]></title><description><![CDATA[<div class="kg-card-markdown"><p>Migrating resources between Azure subscriptions is a <strong>common but risky task</strong>. Whether you’re reorganizing tenants, separating billing, or preparing for a handover, doing this manually is slow and error-prone.<br>
The issue here is that Azure Portal does not provide an option to migrate all resources at once.</p>
<p>For this</p></div>]]></description><link>https://developers.de/2026/01/23/migrating-all-azure-resources-between-subscriptions-with-azure-cli/</link><guid isPermaLink="false">69723816c62a6e11f4d8554a</guid><category><![CDATA[Azure]]></category><category><![CDATA[Cloud]]></category><dc:creator><![CDATA[Damir Dobric]]></dc:creator><pubDate>Fri, 23 Jan 2026 09:57:00 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>Migrating resources between Azure subscriptions is a <strong>common but risky task</strong>. Whether you’re reorganizing tenants, separating billing, or preparing for a handover, doing this manually is slow and error-prone.<br>
The issue here is that Azure Portal does not provide an option to migrate all resources at once.</p>
<p>For this reason, I created a <strong>PowerShell + Azure CLI script</strong> that <strong>automatically migrates all resource groups and their resources</strong> from one subscription to another.</p>
<h2 id="whatthisscriptdoes">What This Script Does</h2>
<p>The script performs the following steps:</p>
<ol>
<li>Connects to a <strong>source Azure subscription</strong></li>
<li>Retrieves <strong>all resource groups</strong></li>
<li>Switches to a <strong>target subscription</strong></li>
<li>Creates missing resource groups</li>
<li>Moves <strong>all supported resources</strong> to the target subscription</li>
<li>Logs success and failure clearly</li>
</ol>
<p>It uses native <strong>Azure Resource Manager (ARM) move operations</strong>, meaning:</p>
<ul>
<li>Resource IDs remain intact</li>
<li>No redeployment is required</li>
<li>Downtime is minimized (but not zero)</li>
</ul>
<hr>
<h2 id="prerequisites">Prerequisites</h2>
<p>Before running the script, ensure:</p>
<ul>
<li>Azure CLI is installed (<code>az version</code>)</li>
<li>You are logged in (<code>az login</code>)</li>
<li>You have <strong>Owner</strong> or <strong>Contributor</strong> permissions on both subscriptions</li>
<li>Resources are in a <strong>movable state</strong></li>
</ul>
<blockquote>
<p>⚠️ Not all Azure resources support cross-subscription moves (e.g., classic resources, some networking dependencies).</p>
</blockquote>
<hr>
<h2 id="scriptconfiguration">⚙️ Script Configuration</h2>
<p>Update the following values before running:</p>
<pre><code class="language-powershell">$sourceSubId = &quot;***&quot;     # Source subscription ID
$targetSubId = &quot;***&quot;     # Target subscription ID
$location = &quot;westeurope&quot; # Location for new resource groups
</code></pre>
<p>The <code>$location</code> value is only used when <strong>creating missing resource groups</strong> in the target subscription.</p>
<hr>
<p><img src="https://images.unsplash.com/photo-1515879218367-8466d910aaa4?q=80&amp;w=1600&amp;auto=format&amp;fit=crop" alt="PowerShell automation terminal"></p>
<hr>
<h2 id="scriptwalkthrough">Script Walkthrough</h2>
<h3 id="1switchtosourcesubscription">1️⃣ Switch to Source Subscription</h3>
<pre><code class="language-powershell">az account set --subscription $sourceSubId
</code></pre>
<p>This ensures all resource discovery happens in the <strong>correct source context</strong>.</p>
<hr>
<h3 id="2retrieveallresourcegroups">2️⃣ Retrieve All Resource Groups</h3>
<pre><code class="language-powershell">$resourceGroups = az group list --query &quot;[].name&quot; -o tsv
</code></pre>
<p>This fetches <strong>every resource group name</strong> in the source subscription.</p>
<hr>
<h3 id="3ensureresourcegroupsexistintarget">3️⃣ Ensure Resource Groups Exist in Target</h3>
<pre><code class="language-powershell">az group exists --name $rg
</code></pre>
<p>If the resource group doesn’t exist in the target subscription, it is automatically created:</p>
<pre><code class="language-powershell">az group create --name $rg --location $location
</code></pre>
<p>✔ Prevents failures during resource moves<br>
✔ Keeps naming consistent</p>
<hr>
<h3 id="4collectresourceids">4️⃣ Collect Resource IDs</h3>
<pre><code class="language-powershell">$ids = az resource list --resource-group $rg --query &quot;[].id&quot; -o tsv
</code></pre>
<p>Azure requires <strong>resource IDs</strong> for move operations, not names.</p>
<hr>
<h3 id="5moveresourcesacrosssubscriptions">5️⃣ Move Resources Across Subscriptions</h3>
<pre><code class="language-powershell">az resource move `
  --destination-group $rg `
  --destination-subscription-id $targetSubId `
  --ids $ids
</code></pre>
<p>This is the <strong>core operation</strong>:</p>
<ul>
<li>Moves resources</li>
<li>Keeps them in the same resource group name</li>
<li>Preserves configuration and metadata</li>
</ul>
<hr>
<h3 id="6errorhandlinglogging">6️⃣ Error Handling &amp; Logging</h3>
<pre><code class="language-powershell">if ($LASTEXITCODE -eq 0) {
    Write-Host &quot;SUCCESS&quot;
} else {
    Write-Host &quot;FAILED - see error above&quot;
}
</code></pre>
<p>Failures usually occur due to:</p>
<ul>
<li>Unsupported resource types</li>
<li>Dependency constraints</li>
<li>Resources spanning multiple resource groups</li>
</ul>
<hr>
<p><img src="https://images.unsplash.com/photo-1556155092-490a1ba16284?q=80&amp;w=1600&amp;auto=format&amp;fit=crop" alt="Azure error diagnostics dashboard"></p>
<hr>
<h2 id="importantlimitations">⚠️ Important Limitations</h2>
<p>Be aware of these Azure constraints:</p>
<p>❌ Some resources <strong>cannot be moved</strong></p>
<ul>
<li>Classic resources</li>
<li>Certain App Service plans</li>
<li>Managed identities with dependencies</li>
</ul>
<p>❌ Resources <strong>must move together</strong></p>
<ul>
<li>VNets + subnets</li>
<li>NICs + VMs</li>
<li>Disks + VMs</li>
</ul>
<p>✔ Azure will <strong>block the move</strong> if dependencies are violated</p>
<hr>
<h2 id="bestpracticesbeforerunning">✅ Best Practices Before Running</h2>
<p>✔ Test on a <strong>single resource group first</strong><br>
✔ Export ARM templates as a backup<br>
✔ Run during a <strong>maintenance window</strong><br>
✔ Validate networking dependencies<br>
✔ Monitor activity logs during execution</p>
<hr>
<h2 id="whenshouldyouusethisscript">When Should You Use This Script?</h2>
<p>This approach is ideal for:</p>
<ul>
<li>Subscription consolidation</li>
<li>Tenant separation</li>
<li>Environment restructuring (Dev → Prod)</li>
<li>M&amp;A cloud migrations</li>
<li>Billing realignment</li>
</ul>
<hr>
<h2 id="finalthoughts">Final Thoughts</h2>
<p>This script provides a <strong>clean, repeatable, and safe</strong> way to migrate Azure resources at scale using native tooling from <strong>Microsoft Azure</strong>.</p>
<p>It’s not magic—but with proper preparation, it can save <strong>hours or days of manual work</strong>.</p>
<p>If you found this useful, feel free to:<br>
Like<br>
Repost<br>
Share your migration war stories</p>
<p>Happy migrating! ☁️</p>
<h2 id="thefullscript">The Full Script</h2>
<pre><code>Write-Host &quot;======================================&quot;
$sourceSubId = &quot;***&quot;
$targetSubId = &quot;***&quot;
$location = &quot;westeurope&quot;               

Write-Host $sourceSubId
Write-Host $targetSubId

# Switch to source subscription
Write-Host &quot;Setting source subscription: $sourceSubId&quot;
az account set --subscription $sourceSubId

# Get list of all resource group names
$resourceGroups = az group list --query &quot;[].name&quot; -o tsv

foreach ($rg in $resourceGroups) {
    Write-Host &quot;======================================&quot; -ForegroundColor Cyan
    Write-Host &quot;Processing source RG: $rg&quot; -ForegroundColor Yellow

    # Check if same-name RG already exists in TARGET subscription
    az account set --subscription $targetSubId

    $exists = az group exists --name $rg --output tsv

    if ($exists -eq &quot;false&quot;) {
        Write-Host &quot;  Creating target RG '$rg' in location $location ...&quot; -ForegroundColor Green
        az group create --name $rg --location $location --subscription $targetSubId
    } else {
        Write-Host &quot;  Target RG '$rg' already exists - skipping creation&quot; -ForegroundColor Green
    }

    # Switch back to source to list resources
    az account set --subscription $sourceSubId

    # Get resource IDs from source RG
    $ids = az resource list --resource-group $rg --query &quot;[].id&quot; -o tsv

    if ($ids) {
        Write-Host &quot;  Moving resources from '$rg' ...&quot; -ForegroundColor Yellow
        az resource move `
            --destination-group $rg `
            --destination-subscription-id $targetSubId `
            --ids $ids

        if ($LASTEXITCODE -eq 0) {
            Write-Host &quot;  SUCCESS&quot; -ForegroundColor Green
        } else {
            Write-Host &quot;  FAILED - see error above. Often due to dependencies or unsupported resource types.&quot; -ForegroundColor Red
        }
    } else {
        Write-Host &quot;  No resources found in '$rg' - skipping move&quot; -ForegroundColor Gray
    }
}

Write-Host &quot;======================================&quot; -ForegroundColor Cyan
Write-Host &quot;All resource groups processed.&quot; -ForegroundColor White
</code></pre>
</div>]]></content:encoded></item><item><title><![CDATA[Cache Strategy Considerations and the Role of Redis]]></title><description><![CDATA[<div class="kg-card-markdown"><p>Before deploying Redis, it's important to evaluate whether it is truly needed for the application in question.</p>
<p>Redis is typically used in scenarios where an application must handle a very high volume of concurrent users often in the range of hundreds of thousands. In our case, this level of demand</p></div>]]></description><link>https://developers.de/2025/06/29/cache-strategy-considerations-and-the-role-of-redis/</link><guid isPermaLink="false">6861332a34452217444e6ec1</guid><dc:creator><![CDATA[Damir Dobric]]></dc:creator><pubDate>Sun, 29 Jun 2025 12:43:36 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>Before deploying Redis, it's important to evaluate whether it is truly needed for the application in question.</p>
<p>Redis is typically used in scenarios where an application must handle a very high volume of concurrent users, often in the range of hundreds of thousands. In our case, this level of demand does not apply.</p>
<p>If caching is required, we generally have two options:</p>
<h4 id="inmemorycaching">In-Memory Caching</h4>
<p>This is the fastest option but has limitations in clustered environments (e.g., state synchronization, failover).</p>
<h4 id="dedicatedcacheservices">Dedicated Cache Services</h4>
<p>These include Redis, Memcached, and others, which are external systems accessed over the network.</p>
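<p>The trade-off between the two options can be sketched in a few lines of code. Below is a minimal, hypothetical Python illustration of the in-memory option: a TTL-based cache living inside the process. It is fast because there is no network hop, but it is also invisible to other cluster nodes, which is exactly the limitation mentioned above:</p>

```python
import time

class InMemoryCache:
    """Minimal in-process cache with per-entry time-to-live (TTL)."""

    def __init__(self, ttl_seconds=60.0):
        self._store = {}          # key -> (value, expiry timestamp)
        self._ttl = ttl_seconds

    def set(self, key, value):
        self._store[key] = (value, time.monotonic() + self._ttl)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() > expires_at:
            del self._store[key]  # expired entry behaves like a cache miss
            return None
        return value

cache = InMemoryCache(ttl_seconds=0.05)
cache.set("user:42", {"name": "Ada"})
print(cache.get("user:42"))   # hit before the TTL expires
time.sleep(0.06)
print(cache.get("user:42"))   # miss after the TTL: None
```

<p>A dedicated cache service exposes essentially the same get/set interface, but every call crosses the network — which is precisely why its contents can be shared and scaled across nodes.</p>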
<p>When considering a cache service, the next logical question is: Which one should we use?</p>
<p>All services, whether email servers, SQL databases, MongoDB, or Redis, are accessed over specific protocols (e.g., TDS for SQL Server, TCP for MongoDB or Redis). Performance comparisons often assume Redis is the fastest option, but that assumption can be misleading.</p>
<p><strong>SQL Server</strong>: Often provides the fastest response times, especially for indexed lookups on structured data. Believe it or not, Jet (the MS Access database engine) is the fastest database as long as only a single user is connected. :)</p>
<p><strong>MongoDB</strong>: Offers strong performance, particularly in distributed cloud-native environments like Azure Cosmos DB.</p>
<p><strong>Redis</strong>: While not inherently the fastest, it excels at horizontal scalability thanks to its built-in partitioning and protocol-level load balancing. This makes Redis suitable for very high-scale scenarios, where clients need to be routed directly to the node holding the relevant data.</p>
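<p>The partitioning idea behind Redis' horizontal scalability can be illustrated with a toy key-to-node mapping. This is a simplified, hypothetical sketch (real Redis Cluster hashes keys with CRC16 into 16384 slots that are assigned to nodes), but the principle is the same: every client computes an identical deterministic mapping, so requests go straight to the node that owns the key, with no central router:</p>

```python
import zlib

# Hypothetical node names, for illustration only
NODES = ["cache-node-0", "cache-node-1", "cache-node-2"]

def node_for(key: str) -> str:
    """Deterministically map a key to one node, so clients can route directly."""
    slot = zlib.crc32(key.encode("utf-8")) % len(NODES)
    return NODES[slot]

# The same key always routes to the same node, on every client:
assert node_for("user:42") == node_for("user:42")
print(node_for("user:42"), node_for("session:7"))
```

<p>Adding nodes changes only the mapping function, which is why this style of partitioning scales close to linearly with the number of nodes.</p>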
<p>So why is Redis frequently used?</p>
<p>Mostly, a lack of architectural insight: many teams adopt Redis without properly researching whether it is the best fit.</p>
<p>Scalability Needs: Redis shines in systems that require linear scaling across many users and nodes, which is often not the case in smaller or mid-sized applications.</p>
<h3 id="conclusion">Conclusion</h3>
<p>For systems with fewer than ~1,000 concurrent users (this needs to be measured for every application!), it is often more efficient and maintainable to use SQL tables directly, avoiding the added complexity and operational overhead of Redis.<br>
I'm not saying you should use SQL or MS Access for caching in general. I'm saying it is smart to understand the problem and do the required performance measurements before making decisions. Believe me, you will be surprised.</p>
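<p>The kind of measurement meant here needs no special tooling. A rough, hypothetical Python sketch, where a 1 ms sleep stands in for the network round trip to an external cache service:</p>

```python
import time

def avg_seconds(fn, runs=100):
    """Average wall-clock time per call over a number of runs."""
    start = time.perf_counter()
    for _ in range(runs):
        fn()
    return (time.perf_counter() - start) / runs

local_store = {"user:42": {"name": "Ada"}}

def local_lookup():
    return local_store["user:42"]     # in-process: no network involved

def simulated_remote_lookup():
    time.sleep(0.001)                 # stand-in for a ~1 ms network round trip
    return {"name": "Ada"}

print(f"in-process  : {avg_seconds(local_lookup) * 1e6:8.2f} us/op")
print(f"remote (sim): {avg_seconds(simulated_remote_lookup) * 1e6:8.2f} us/op")
```

<p>Numbers like these, measured against the real database and the real cache service under realistic load, are what should drive the decision.</p>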
<p>Redis is often used as a synonym for caching, just as Docker and Kubernetes are commonly associated with microservices. However, none of these associations are entirely accurate in a generalized context.</p>
</div>]]></content:encoded></item><item><title><![CDATA[Model Performance]]></title><description><![CDATA[<div class="kg-card-markdown"><p>Evaluating large language models (LLMs) is becoming increasingly difficult. One major challenge is test set contamination, where benchmark questions unintentionally end up in a model’s training data—skewing results and making once-reliable benchmarks quickly outdated. While newer benchmarks try to avoid this by using crowdsourced questions or LLM-based evaluations,</p></div>]]></description><link>https://developers.de/2025/04/18/model-performance/</link><guid isPermaLink="false">6802768a2531c70d884c2f62</guid><category><![CDATA[LLM]]></category><category><![CDATA[AI]]></category><category><![CDATA[GPT]]></category><dc:creator><![CDATA[Damir Dobric]]></dc:creator><pubDate>Fri, 18 Apr 2025 16:18:12 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>Evaluating large language models (LLMs) is becoming increasingly difficult. One major challenge is test set contamination, where benchmark questions unintentionally end up in a model’s training data—skewing results and making once-reliable benchmarks quickly outdated. While newer benchmarks try to avoid this by using crowdsourced questions or LLM-based evaluations, these methods come with their own problems, like bias and difficulty in judging complex tasks.</p>
<p>That’s where <a href="https://openreview.net/forum?id=sKYHBTAxVa">LiveBench</a> comes in.</p>
<p>LiveBench is a benchmark designed to address these issues head-on. It features regularly updated questions sourced from fresh content—like math competitions, academic papers, and news articles—and scores answers automatically using objective ground-truth values. It covers a wide range of tough tasks, including math, coding, reasoning, and instruction following, pushing LLMs to their limits.</p>
<p>With questions refreshed monthly and difficulty scaling over time, LiveBench is built not just for today’s models but for the next wave of AI breakthroughs. Top models currently score below 70%, showing just how challenging—and necessary—this benchmark is.</p>
<p>I put together a few benchmark charts.</p>
<h3 id="modelperformancebyaveragescore">Model Performance by Average Score</h3>
<p><img src="https://developersde.blob.core.windows.net/usercontent/2025/4/181558_output.png" alt="181558_output"></p>
<h3 id="modelreasoningperformance">Model Reasoning Performance</h3>
<p><img src="https://developersde.blob.core.windows.net/usercontent/2025/4/181616_output%20(4).png" alt="181616_output%20(4)"></p>
<h3 id="modelcodingperformance">Model Coding Performance</h3>
<p><img src="https://developersde.blob.core.windows.net/usercontent/2025/4/18166_output%20(2).png" alt="18166_output%20(2)"></p>
<h3 id="modellanguageperformance">Model Language Performance</h3>
<p><img src="https://developersde.blob.core.windows.net/usercontent/2025/4/18169_output%20(3).png" alt="18169_output%20(3)"></p>
<h3 id="recap">Recap</h3>
<p>I created all diagrams using GPT-4o, based on data obtained from <a href="https://livebench.ai/#/?Coding=a">https://livebench.ai/#/?Coding=a</a><br>
If some models are missing from the diagrams, please forgive me (they were omitted by the GPT diagram generation :)).</p>
</div>]]></content:encoded></item><item><title><![CDATA[Recommended AI Sessions]]></title><description><![CDATA[<div class="kg-card-markdown"><p>Dear all, here is the list of recommended resources related to AI Sessions.<br>
It is a great foundation to start learning about AI.</p>
<ol>
<li>
<p>BRK440: Getting started with Generative AI in Azure<br>
<a href="https://github.com/microsoft/aitour-generative-ai-in-azure">https://github.com/microsoft/aitour-generative-ai-in-azure</a></p>
</li>
<li>
<p>BRK441: Build AI Solutions with Azure AI Foundry<br>
<a href="https://github.com/microsoft/aitour-concept-to-creation-ai-studio">https://github.com/microsoft/aitour-concept-to-creation-ai-studio</a></p>
</li>
<li>
<p>BRK443:</p></li></ol></div>]]></description><link>https://developers.de/2025/03/25/recommended-ai-sessions/</link><guid isPermaLink="false">67e01ff4e62fcc1d54ff428c</guid><category><![CDATA[LLM]]></category><category><![CDATA[AI]]></category><category><![CDATA[GPT]]></category><dc:creator><![CDATA[Damir Dobric]]></dc:creator><pubDate>Tue, 25 Mar 2025 07:07:00 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>Dear all, here is the list of recommended resources related to AI Sessions.<br>
It is a great foundation to start learning about AI.</p>
<ol>
<li>
<p>BRK440: Getting started with Generative AI in Azure<br>
<a href="https://github.com/microsoft/aitour-generative-ai-in-azure">https://github.com/microsoft/aitour-generative-ai-in-azure</a></p>
</li>
<li>
<p>BRK441: Build AI Solutions with Azure AI Foundry<br>
<a href="https://github.com/microsoft/aitour-concept-to-creation-ai-studio">https://github.com/microsoft/aitour-concept-to-creation-ai-studio</a></p>
</li>
<li>
<p>BRK443: Build your code-first app with Azure AI Agent Service<br>
<a href="https://github.com/microsoft/aitour-azure-openai-assistants">https://github.com/microsoft/aitour-azure-openai-assistants</a></p>
</li>
<li>
<p>BRK444: Getting started with AI Agents in Azure<br>
<a href="https://github.com/microsoft/aitour-getting-started-with-ai-agents">https://github.com/microsoft/aitour-getting-started-with-ai-agents</a></p>
</li>
<li>
<p>BRK450: Prompty, AI Studio and practical E2E development<br>
<a href="https://github.com/microsoft/aitour-llmops-with-gen-ai-tools">https://github.com/microsoft/aitour-llmops-with-gen-ai-tools</a></p>
</li>
<li>
<p>BRK451: Code-first GenAIOps from prototype to production<br>
<a href="https://github.com/microsoft/aitour-llmops-with-gen-ai-tools">https://github.com/microsoft/aitour-llmops-with-gen-ai-tools</a></p>
</li>
<li>
<p>BRK452: Operationalize AI responsibly with Azure AI Studio<br>
<a href="https://github.com/microsoft/aitour-operate-ai-responsibly-with-ai-studio">https://github.com/microsoft/aitour-operate-ai-responsibly-with-ai-studio</a></p>
</li>
<li>
<p>BRK453: Explore cutting-edge models: LLMs, SLMs and more<br>
<a href="https://github.com/microsoft/aitour-exploring-cutting-edge-models">https://github.com/microsoft/aitour-exploring-cutting-edge-models</a></p>
</li>
</ol>
<p><img src="https://developersde.blob.core.windows.net/usercontent/2025/4/2411_4c313b96-61cf-4ec7-8dfd-ed97a87c7d06.png" alt="2411_4c313b96-61cf-4ec7-8dfd-ed97a87c7d06"></p>
</div>]]></content:encoded></item><item><title><![CDATA[(Iterative) Retrieval-Augmented Generation]]></title><description><![CDATA[<div class="kg-card-markdown"><p>Right now, it seems that most of the community is fixated on RAG (excluding Prompt Engineering). However, there is a technique called <strong>Iterative RAG</strong> (Ma et al., 2023; Li et al., 2024; Chan et al., 2024; Shi et al., 2024).</p>
<p>This is a more advanced approach in natural language</p></div>]]></description><link>https://developers.de/2025/01/14/iterative-retrieval-augmented-generation/</link><guid isPermaLink="false">67854f8104230a1e58966b2e</guid><category><![CDATA[LLM]]></category><category><![CDATA[GPT]]></category><category><![CDATA[AI]]></category><dc:creator><![CDATA[Damir Dobric]]></dc:creator><pubDate>Tue, 14 Jan 2025 09:01:00 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>Right now, it seems that most of the community is fixated on RAG (excluding Prompt Engineering). However, there is a technique called <strong>Iterative RAG</strong> (Ma et al., 2023; Li et al., 2024; Chan et al., 2024; Shi et al., 2024).</p>
<p>This is a more advanced approach in natural language processing and generative AI that enhances the interaction between information retrieval and generation by refining outputs through multiple iterations.</p>
<h2 id="1whatisrag">1. What is RAG?</h2>
<p>RAG integrates two main components:</p>
<ul>
<li><strong>Retriever</strong>: Finds relevant documents or data from an external knowledge base. This is typically the task of some connector.</li>
<li><strong>Generator</strong>: Generates a response or output based on the retrieved information. This is covered by the model itself.</li>
</ul>
<p>The aim is to use external knowledge to produce factually grounded and contextually relevant outputs. The data can be stored in a database, retrieved from an external service, read from documents, etc.</p>
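<p>The interplay of the two components can be sketched in a toy example. Everything below is a hypothetical illustration: the retriever ranks documents by naive keyword overlap, and the "generator" is a plain string template standing in for the LLM call:</p>

```python
DOCS = [
    "Redis is an in-memory data store often used as a cache.",
    "Semantic Kernel lets you chain functions into pipelines.",
    "Azure resources can be moved between subscriptions.",
]

def retrieve(query, docs=DOCS, k=1):
    """Retriever: rank documents by keyword overlap with the query."""
    q = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return ranked[:k]

def generate(query, context):
    """Generator stand-in: a real system would pass the context to an LLM."""
    return f"Q: {query} | grounded in: {context[0]}"

query = "What is Redis used for?"
print(generate(query, retrieve(query)))
```

<p>Real systems replace the keyword overlap with vector search and the template with a model call, but the retrieve-then-generate shape stays the same.</p>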
<h2 id="2whatmakesiterativeragdifferent">2. What Makes Iterative RAG Different?</h2>
<p>Iterative RAG improves upon standard RAG by performing multiple cycles of refinement. It iteratively improves the quality of output by revisiting retrieval and generation steps.</p>
<h3 id="iterationprocessandfeedbackloop">Iteration Process and Feedback Loop:</h3>
<ol>
<li><strong>Initial Retrieval</strong>: Retrieve a set of documents or data points (same as standard RAG).</li>
<li><strong>Generation</strong>: Produce an output based on the retrieved information (same as standard RAG).</li>
<li><strong>Feedback Loop</strong>: Analyze the output to identify gaps or areas for improvement.</li>
<li><strong>Refinement Retrieval</strong>: Use the feedback to refine the search for better data (same retrieval mechanism as standard RAG).</li>
<li><strong>Regeneration</strong>: Generate a new output based on the refined retrieval (same generation mechanism as standard RAG).</li>
</ol>
<p>This loop continues until:</p>
<ul>
<li>The output meets a predefined quality threshold, or</li>
<li>A maximum number of iterations is reached.</li>
</ul>
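<p>The loop described above can be sketched as a small control function. The <code>retrieve</code>, <code>generate</code>, and <code>quality</code> callables below are toy stand-ins for the real components, purely for illustration:</p>

```python
def iterative_rag(query, retrieve, generate, quality, threshold=0.8, max_iters=3):
    """Minimal sketch of the Iterative RAG control loop."""
    feedback, answer = None, None
    for _ in range(max_iters):
        docs = retrieve(query, feedback)   # (refinement) retrieval
        answer = generate(query, docs)     # (re)generation
        score, feedback = quality(answer)  # feedback loop
        if score >= threshold:             # quality threshold reached -> stop
            break
    return answer

# Toy stand-ins: the answer "improves" each round until it passes the threshold.
state = {"round": 0}

def retrieve(query, feedback):
    refinement = f" refined by '{feedback}'" if feedback else ""
    return [f"doc about {query}{refinement}"]

def generate(query, docs):
    state["round"] += 1
    return f"answer v{state['round']} using {docs[0]}"

def quality(answer):
    return 0.4 * state["round"], "add more detail"  # v1 -> 0.4, v2 -> 0.8

print(iterative_rag("RAG", retrieve, generate, quality))
```

<p>Note how the loop terminates either on the quality threshold or on <code>max_iters</code>, matching the two stopping conditions above.</p>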
<h2 id="3advantagesofiterativerag">3. Advantages of Iterative RAG</h2>
<ul>
<li><strong>Improved Accuracy</strong>: Addresses errors or missing information through iterations.</li>
<li><strong>Contextual Relevance</strong>: Refines context to better align the final response with the query.</li>
<li><strong>Dynamic Adaptation</strong>: Adjusts retrieval and generation strategies dynamically.</li>
</ul>
<p>This process seems to have evolved over time. I suspect that reasoning might be partly inspired by the iterative feedback process introduced by <em>Iterative RAG</em>.</p>
<h2 id="4applications">4. Applications</h2>
<ul>
<li><strong>Question Answering</strong>: Produces detailed, factually accurate answers by refining retrieved knowledge.</li>
<li><strong>Document Summarization</strong>: Ensures summaries include all relevant information.</li>
<li><strong>Conversational AI</strong>: Enhances dialogue coherence by refining context and revisiting prior responses.</li>
</ul>
<h2 id="5challenges">5. Challenges</h2>
<ul>
<li><strong>Computational Cost</strong>: Iterations increase latency and resource usage.</li>
<li><strong>Optimization Complexity</strong>: Balancing retrieval and generation across iterations can be a very tricky task.</li>
<li><strong>Risk of Overfitting</strong>: Excessive iterations might lead to overly specific or biased outputs.</li>
</ul>
<h3 id="recap">Recap</h3>
<p>Iterative RAG is a significant advancement in combining retrieval and generation systems, offering a robust way to handle complex queries and generate high-quality, accurate responses. Although RAG methods achieve strong performance on multi-hop tasks like HotpotQA, there are significant limitations.<br>
For example, RAG is chunk-based, and it struggles with knowledge-intensive tasks (Wang et al., 2024a) because chunks contain excessive text noise and do not capture the relations between pieces of information. With this limitation, LLMs cannot effectively use the augmented knowledge.</p>
</div>]]></content:encoded></item><item><title><![CDATA[Building Intelligent Workflows with Semantic Kernel Pipelines]]></title><description><![CDATA[<div class="kg-card-markdown"><p>When it comes to automating workflows, breaking down complex tasks into smaller, modular steps can make the process more efficient and maintainable. Semantic Kernel (SK) provides a powerful way to achieve this through pipelines. In this post, we’ll explore how to create and execute a pipeline that processes a</p></div>]]></description><link>https://developers.de/2024/12/30/building-intelligent-workflows-with-semantic-kernel-pipelines/</link><guid isPermaLink="false">6772915fba29d61118ffacab</guid><category><![CDATA[LLM]]></category><category><![CDATA[.NET]]></category><category><![CDATA[GPT]]></category><category><![CDATA[C#]]></category><dc:creator><![CDATA[Damir Dobric]]></dc:creator><pubDate>Mon, 30 Dec 2024 12:33:27 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>When it comes to automating workflows, breaking down complex tasks into smaller, modular steps can make the process more efficient and maintainable. Semantic Kernel (SK) provides a powerful way to achieve this through pipelines. In this post, we’ll explore how to create and execute a pipeline that processes a number through parsing, arithmetic, truncation, and humanization. Please note that the SK pipeline described in this example is not a stateful machine designed for long-running processes. If you need to support such scenarios, I recommend using Azure Durable Functions.</p>
<h3 id="whyusesemantickernelpipelines">Why Use Semantic Kernel Pipelines?</h3>
<p>Semantic Kernel pipelines allow you to:</p>
<ul>
<li>Modularize functionality into reusable components.</li>
<li>Chain together functions to handle complex workflows.</li>
<li>Integrate with AI capabilities such as prompt-based generation.</li>
</ul>
<p>Let’s dive into a practical example where we build a pipeline to take a string, process it numerically, and then convert it into a spelled-out English phrase.</p>
<h3 id="theworkflow">The Workflow</h3>
<p>In this example, I use the pipeline to solve the following problem:</p>
<ol>
<li>Parse a string representation of a number (e.g., &quot;123.456&quot;) into a double.</li>
<li>Multiply the double by another double (e.g., 78.90).</li>
<li>Truncate the resulting value to an integer.</li>
<li>Convert the integer into its English word representation (e.g., &quot;nine thousand seven hundred forty&quot;).</li>
</ol>
<pre><code class="language-csharp">public async Task DemoPipelineAsync()
{
    IKernelBuilder builder = Kernel.CreateBuilder();
    builder.AddOpenAIChatCompletion(
        TestConfiguration.OpenAI.ChatModelId,
        TestConfiguration.OpenAI.ApiKey);
    builder.Services.AddLogging(c =&gt; c.AddConsole().SetMinimumLevel(LogLevel.Trace));
    Kernel kernel = builder.Build();

    KernelFunction parseDouble = KernelFunctionFactory.CreateFromMethod((string s) =&gt; double.Parse(s, CultureInfo.InvariantCulture), &quot;parseDouble&quot;);
    KernelFunction multiplyByN = KernelFunctionFactory.CreateFromMethod((double i, double n) =&gt; i * n, &quot;multiplyByN&quot;);
    KernelFunction truncate = KernelFunctionFactory.CreateFromMethod((double d) =&gt; (int)d, &quot;truncate&quot;);
    KernelFunction humanize = KernelFunctionFactory.CreateFromPrompt(new PromptTemplateConfig()
    {
        Template = &quot;Spell out this number in English: {{$number}}&quot;,
        InputVariables = [new() { Name = &quot;number&quot; }],
    });
    KernelFunction pipeline = KernelFunctionCombinators.Pipe([parseDouble, multiplyByN, truncate, humanize], &quot;pipeline&quot;);

    KernelArguments args = new()
    {
        [&quot;s&quot;] = &quot;123.456&quot;,
        [&quot;n&quot;] = (double)78.90,
    };

    // - The parseDouble function will be invoked, read &quot;123.456&quot; from the arguments, and parse it into (double)123.456.
    // - The multiplyByN function will be invoked, with i=123.456 and n=78.90, and return (double)9740.6784.
    // - The truncate function will be invoked, with d=9740.6784, and return (int)9740.
    // - The humanize function will be invoked, with number=9740, and return the spelled-out English phrase as the final result.
    Console.WriteLine(await pipeline.InvokeAsync(kernel, args));
}
</code></pre>
<h4 id="creatingthepipeline">Creating the Pipeline</h4>
<p>Step 1: Define the Functions<br>
We start by defining individual functions for each step in the workflow. Using Semantic Kernel’s KernelFunctionFactory, we create these modular functions:</p>
<ul>
<li><strong>Parsing a string into a double</strong>: Converts the string &quot;123.456&quot; into the numeric value 123.456.</li>
<li><strong>Multiplication</strong>: Multiplies the parsed number by a given multiplier.</li>
<li><strong>Truncation</strong>: Truncates the result to an integer.</li>
<li><strong>Humanization</strong>: Converts the integer into a spelled-out English string using a prompt-based function.</li>
</ul>
<p>Step 2: Combine Functions into a Pipeline<br>
With the functions ready, we use KernelFunctionCombinators.Pipe to chain them together into a pipeline. The output of one function feeds directly into the next, ensuring a seamless data flow.</p>
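<p>Note that KernelFunctionCombinators is not part of the Semantic Kernel core package; it comes from the Semantic Kernel samples. As a rough sketch of the idea behind such a Pipe combinator (simplified here for illustration; the real sample differs in details such as error handling), it wraps the function list into a single KernelFunction that feeds each result into the first parameter of the next function:</p>

```csharp
using System.Collections.Generic;
using Microsoft.SemanticKernel;

// Sketch of a Pipe combinator in the spirit of the SK samples (not the
// original implementation; details are simplified for illustration).
public static class KernelFunctionCombinators
{
    public static KernelFunction Pipe(
        IReadOnlyList<KernelFunction> functions, string name) =>
        KernelFunctionFactory.CreateFromMethod(
            async (Kernel kernel, KernelArguments arguments) =>
            {
                object? result = null;

                foreach (var function in functions)
                {
                    if (result is not null)
                    {
                        // Pass the previous result as the next function's
                        // first declared parameter.
                        string first = function.Metadata.Parameters[0].Name;
                        arguments[first] = result;
                    }

                    var functionResult = await function.InvokeAsync(kernel, arguments);
                    result = functionResult.GetValue<object>();
                }

                return result;
            },
            name);
}
```

This is why the argument names matter: the first function reads &quot;s&quot; from the arguments, and every subsequent function receives the previous result under its own first parameter name.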
<p>Step 3: Provide Input Arguments<br>
The pipeline takes input in the form of KernelArguments. For our example, we provide:</p>
<ul>
<li>&quot;123.456&quot; as the string to parse.</li>
<li>78.90 as the multiplier.</li>
</ul>
<p>Step 4: Execute the Pipeline<br>
Finally, the pipeline is invoked with the input arguments. Each function is executed sequentially, producing the final human-readable result.</p>
<h3 id="wrapup">Wrap-up</h3>
<p>Semantic Kernel pipelines make it easy to build intelligent workflows that combine traditional logic with AI capabilities. Whether you’re processing numbers, analyzing text, or orchestrating complex tasks, pipelines offer a structured and efficient approach to solving problems.</p>
<p>If you’re looking to build smarter applications for the new software era, try Semantic Kernel! With a little creativity, the possibilities are endless.</p>
</div>]]></content:encoded></item><item><title><![CDATA[How to calculate the Cosine Similarity in C#?]]></title><description><![CDATA[<div class="kg-card-markdown"><p>Cosine similarity measures the cosine of the angle between two non-zero vectors in an n-dimensional space. Its value ranges from -1 to 1:</p>
<ul>
<li><strong>A cosine similarity of 1</strong> implies that the vectors are identical.</li>
<li><strong>A cosine similarity of 0</strong> implies that the vectors are orthogonal (no similarity).</li>
<li><strong>A cosine similarity</strong></li></ul></div>]]></description><link>https://developers.de/2024/12/09/how-to-calculate-the-cosine-similarity-in-c/</link><guid isPermaLink="false">6755a9f8e06b310bdc3dc9ba</guid><category><![CDATA[LLM]]></category><category><![CDATA[AI]]></category><dc:creator><![CDATA[Damir Dobric]]></dc:creator><pubDate>Mon, 09 Dec 2024 10:32:00 GMT</pubDate><media:content url="https://developersde.blob.core.windows.net/usercontent/2024/12/81426_Designer.png" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://developersde.blob.core.windows.net/usercontent/2024/12/81426_Designer.png" alt="How to calculate the Cosine Similarity in C#?"><p>Cosine similarity measures the cosine of the angle between two non-zero vectors in an n-dimensional space. Its value ranges from -1 to 1:</p>
<ul>
<li><strong>A cosine similarity of 1</strong> implies that the vectors are identical.</li>
<li><strong>A cosine similarity of 0</strong> implies that the vectors are orthogonal (no similarity).</li>
<li><strong>A cosine similarity of -1</strong> implies that the vectors are diametrically opposed.</li>
</ul>
<p>In the context of this post, the calculation consists of the following steps:</p>
<ul>
<li><strong>Dot Product</strong>: This is calculated by multiplying corresponding components of the two vectors and summing these products.</li>
<li><strong>Magnitude</strong>: The magnitude (or length) of each vector is computed as the square root of the sum of the squares of its components.</li>
<li><strong>Dividing the Dot Product by the Product of the Magnitudes</strong>: This gives the cosine of the angle between the two vectors, which serves as the similarity measure. The more this value approaches 1, the closer the vectors are aligned.</li>
</ul>
<p><img src="https://developersde.blob.core.windows.net/usercontent/2024/12/81423_cosine.png" alt="How to calculate the Cosine Similarity in C#?"></p>
<p>The following method compares two vectors of the same dimension and calculates the cosine similarity, as used with embeddings produced by Large Language Models.</p>
<pre><code class="language-csharp">  /// &lt;summary&gt;
  /// Calculates the cosine similarity.
  /// &lt;/summary&gt;
  /// &lt;param name=&quot;embedding1&quot;&gt;&lt;/param&gt;
  /// &lt;param name=&quot;embedding2&quot;&gt;&lt;/param&gt;
  /// &lt;returns&gt;&lt;/returns&gt;
  /// &lt;exception cref=&quot;ArgumentException&quot;&gt;&lt;/exception&gt;
  public double CalculateSimilarity(float[] embedding1, float[] embedding2)
  {
      if (embedding1.Length != embedding2.Length)
      {
          return 0;
          //throw new ArgumentException(&quot;embedding must have the same length.&quot;);
      }

      double dotProduct = 0.0;
      double magnitude1 = 0.0;
      double magnitude2 = 0.0;

      for (int i = 0; i &lt; embedding1.Length; i++)
      {
          dotProduct += embedding1[i] * embedding2[i];
          magnitude1 += Math.Pow(embedding1[i], 2);
          magnitude2 += Math.Pow(embedding2[i], 2);
      }

      magnitude1 = Math.Sqrt(magnitude1);
      magnitude2 = Math.Sqrt(magnitude2);

      if (magnitude1 == 0.0 || magnitude2 == 0.0)
      {
          throw new ArgumentException(&quot;embedding must not have zero magnitude.&quot;);
      }

      double cosineSimilarity = dotProduct / (magnitude1 * magnitude2);

      return cosineSimilarity;
  }
</code></pre>
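<p>A quick worked example: for the vectors (1, 2, 3) and (4, 5, 6), the dot product is 1·4 + 2·5 + 3·6 = 32, the magnitudes are √14 and √77, and so the cosine similarity is 32 / (√14 · √77) ≈ 0.9746. The self-contained sketch below (the class name SimilarityCalculator is just a hypothetical host for the method) reproduces the algorithm and checks this value:</p>

```csharp
using System;

public class SimilarityCalculator
{
    // Condensed version of the cosine similarity method from this post,
    // reproduced so the sample compiles standalone.
    public double CalculateSimilarity(float[] embedding1, float[] embedding2)
    {
        if (embedding1.Length != embedding2.Length)
            throw new ArgumentException("embeddings must have the same length.");

        double dotProduct = 0.0, magnitude1 = 0.0, magnitude2 = 0.0;

        for (int i = 0; i < embedding1.Length; i++)
        {
            dotProduct += embedding1[i] * embedding2[i];
            magnitude1 += embedding1[i] * embedding1[i];
            magnitude2 += embedding2[i] * embedding2[i];
        }

        magnitude1 = Math.Sqrt(magnitude1);
        magnitude2 = Math.Sqrt(magnitude2);

        if (magnitude1 == 0.0 || magnitude2 == 0.0)
            throw new ArgumentException("embeddings must not have zero magnitude.");

        return dotProduct / (magnitude1 * magnitude2);
    }

    public static void Main()
    {
        var calculator = new SimilarityCalculator();

        double similarity = calculator.CalculateSimilarity(
            new float[] { 1f, 2f, 3f },
            new float[] { 4f, 5f, 6f });

        // 32 / (sqrt(14) * sqrt(77)) ≈ 0.9746
        Console.WriteLine(similarity);
    }
}
```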
<p><img src="https://developersde.blob.core.windows.net/usercontent/2024/12/81421_Designer.png" alt="How to calculate the Cosine Similarity in C#?"></p>
<p>Visit: <a href="https://daenet.com">https://daenet.com</a></p>
</div>]]></content:encoded></item><item><title><![CDATA[DevOps issue when building NUGET package with .NET application]]></title><description><![CDATA[<div class="kg-card-markdown"><p>When working with .NET and Azure DevOps, we encountered an interesting issue. The pipeline failed, and the log does not show any meaningful information. The only issue in the log was this one:</p>
<pre><code>&quot;D:\a\1\s\src\YOURPROJECT.Api.csproj&quot; (pack target) (1:7) -&gt;
       (GenerateNuspec</code></pre></div>]]></description><link>https://developers.de/2024/05/24/devops-issue-when-building/</link><guid isPermaLink="false">66503a4c9a1d2d16acce5376</guid><category><![CDATA[.NET]]></category><category><![CDATA[C#]]></category><dc:creator><![CDATA[Damir Dobric]]></dc:creator><pubDate>Fri, 24 May 2024 07:28:45 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>When working with .NET and Azure DevOps, we encountered an interesting issue. The pipeline failed, and the log does not show any meaningful information. The only issue in the log was this one:</p>
<pre><code>&quot;D:\a\1\s\src\YOURPROJECT.Api.csproj&quot; (pack target) (1:7) -&gt;
       (GenerateNuspec target) -&gt; 
         C:\hostedtoolcache\windows\dotnet\sdk\8.0.300\Sdks\NuGet.Build.Tasks.Pack\build\NuGet.Build.Tasks.Pack.targets(221,5): error NU5026: The file 'D:\a\1\s\src\YOURPROJECT\bin\release\net8.0\YOURPROJECT.dll' to be packed was not found on disk. 
</code></pre>
<p>The reason is that we activated auto-generation of the package inside the .csproj file:</p>
<pre><code>&lt;GeneratePackageOnBuild&gt;True&lt;/GeneratePackageOnBuild&gt;
</code></pre>
<p>This is not supported within the Azure DevOps pipeline; remove the property (or set it to False) and let the pack task of the pipeline create the package instead. While we are on the subject of unsupported features, be aware of the following as well: if your project file performs any file copy operation, it might also cause issues in the pipeline.</p>
<pre><code>&lt;Target Name=&quot;CopyPackage&quot; AfterTargets=&quot;Pack&quot;&gt;
	&lt;Copy SourceFiles=&quot;$(OutputPath)..\$(PackageId).$(PackageVersion).nupkg&quot; DestinationFolder=&quot;$(SolutionDir)..\nuget&quot; /&gt;
&lt;/Target&gt;
</code></pre>
<p>Hope this helps.</p>
</div>]]></content:encoded></item><item><title><![CDATA[Modular Layered Architecture of Backend Applications]]></title><description><![CDATA[<div class="kg-card-markdown"><p>In the world of backend software development, the architecture you choose can greatly impact the flexibility, scalability, and usability of your applications. One such practical and efficient architecture is the modular layered architecture.</p>
<p>The modular layered architecture breaks down an application into separate modules - each with specific functions. The</p></div>]]></description><link>https://developers.de/2024/03/27/modular-layered-architecture-of-backend-applications/</link><guid isPermaLink="false">66046f6e7e14e8140cc91f0e</guid><category><![CDATA[Azure]]></category><category><![CDATA[.NET]]></category><category><![CDATA[C#]]></category><dc:creator><![CDATA[Damir Dobric]]></dc:creator><pubDate>Wed, 27 Mar 2024 19:15:37 GMT</pubDate><media:content url="https://developersde.blob.core.windows.net/usercontent/2024/3/272155_blog%20modules.png" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://developersde.blob.core.windows.net/usercontent/2024/3/272155_blog%20modules.png" alt="Modular Layered Architecture of Backend Applications"><p>In the world of backend software development, the architecture you choose can greatly impact the flexibility, scalability, and usability of your applications. One such practical and efficient architecture is the modular layered architecture.</p>
<p>The modular layered architecture breaks down an application into separate modules - each with specific functions. The higher level of organization facilitates a cleaner, more maintainable codebase. In this architecture, the Application Domain or API layer generally holds the reins - it knows the entirety or at least the crux of what the application is supposed to do.</p>
<p>Consider an example: a command for dealing with some hardware</p>
<p><code>api.SwitchOnLight(green);</code></p>
<p>or dealing with a vector database</p>
<p><code>api.UpsertDataSourceAsync(string context, string url);</code>.</p>
<p>Here, the API knows about the functions it's supposed to perform, that is, turn on the light to the green color or update an existing vector in the vector database.</p>
<p>When the API is requested to perform these operations, it communicates with underlying modules to carry out the task. For anything related to database operations, there comes the Data Access Layer (DAL). This concept is consistent with the Repository Pattern as interpreted by developers.</p>
<p>However, a question arises: what about when the API has to deal with external systems such as a lighting or other hardware system? Here, we need an architecture that seamlessly integrates all kinds of external services - a more specialized layer such as a Hardware Access Layer (HAL) for hardware interaction, along with the DAL for database interaction. So we talk about Service Layers rather than the Repository Pattern, which is a special case found in simpler applications.</p>
<p>Peek into the realms of Windows, and you may encounter HAL - an age-old concept that continues to deliver. From the perspective of the API, to switch on the light, the code might look like this:</p>
<pre><code class="language-csharp">SwitchOnLight(color)
{
   _hal.SendMessage(new Message{ clr = color, intensity = default});
}
</code></pre>
<p>The HAL implementations include HttpAccessLayer, ZigbeeAccessLayer, etc., each designed to communicate effectively with a particular set of hardware. The only thing they need to know is how to speak the hardware's language, not anything specific about the application.</p>
<p><img src="https://developersde.blob.core.windows.net/usercontent/2024/3/272156_blog%20modules.png" alt="Modular Layered Architecture of Backend Applications"></p>
<p>However, note that a design flaw often seen is letting the DAL or HAL know too much about our application. Continuing with our earlier example, the following design would not be ideal:</p>
<pre><code class="language-csharp">UpsertDataSourceAsync(string context, string url)
{
    _dal.UpsertDataSourceAsync(context, url);
}
</code></pre>
<p>In this case, the DAL knows about urls and contexts. These are application artefacts. It's essential that layers are as ignorant about the application as possible - ideally completely ignorant. The main idea is to make these layers as reusable as possible. Independently of reusability, it is also good to follow the single responsibility principle in each component. Consider transporting this layer to another application that needs to work with cars and houses data - having to handle context, url or any other application-specific detail might not be the best approach.</p>
<p>A better approach for DAL design would look like this:</p>
<pre><code class="language-csharp">UpsertDataSourceAsync(string context, string url)
{
    _dal.UpsertVectorAsync(dataSourceCollectionName,
        new Payload { url = url });
}
</code></pre>
<p>Here, the DAL takes the responsibility of creating the payload in the given collection of the vector database. The API that implements <em>UpsertDataSourceAsync</em> is the only player here that needs to understand the bigger picture (context and url), allowing the DAL and HAL to remain efficient, simple, and reusable.</p>
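<p>To make the separation concrete, here is a minimal sketch of what such a contract could look like. All names below (IDataAccessLayer, Payload, DataSourceApi, the collection name) are hypothetical illustrations, not taken from a real project: the point is only that the DAL sees generic vector-database concepts (collections, payloads), while the API layer alone understands context and url.</p>

```csharp
using System.Threading.Tasks;

// Hypothetical DAL contract: it only understands vector-database concepts
// (collections, payloads), nothing application-specific.
public record Payload(string url);

public interface IDataAccessLayer
{
    Task UpsertVectorAsync(string collectionName, Payload payload);
}

// The API layer is the only place that understands 'context' and 'url'.
public class DataSourceApi
{
    private readonly IDataAccessLayer _dal;
    private const string dataSourceCollectionName = "datasources";

    public DataSourceApi(IDataAccessLayer dal) => _dal = dal;

    public Task UpsertDataSourceAsync(string context, string url)
        // Map the application concepts onto the generic DAL operation.
        => _dal.UpsertVectorAsync(dataSourceCollectionName, new Payload(url));
}
```

With this shape, the DAL implementation can be reused by any application that stores payloads in collections, regardless of what those payloads mean.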
<p>To conclude, the modular layered architecture truly shines when it comes to separating concretes from the abstracts, enabling the creation of a versatile, reusable, and maintainable backend architecture.</p>
</div>]]></content:encoded></item><item><title><![CDATA[What is the proper way to read configuration and settings?]]></title><description><![CDATA[<div class="kg-card-markdown"><p>.Net applications have a standard process for handling application configuration and settings. In my code reviews, I've observed that developers often approach configuration in a variety of unconventional or &quot;creative&quot; ways, which is largely incorrect. It's crucial to ensure your code can consistently load the configuration from specific</p></div>]]></description><link>https://developers.de/2024/01/13/what-is-the-proper-way-to-read-configuration-and-settings/</link><guid isPermaLink="false">62693ddae03af60a94dd1b55</guid><category><![CDATA[C#]]></category><category><![CDATA[.NET]]></category><category><![CDATA[Azure]]></category><dc:creator><![CDATA[Damir Dobric]]></dc:creator><pubDate>Sat, 13 Jan 2024 11:31:39 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>.Net applications have a standard process for handling application configuration and settings. In my code reviews, I've observed that developers often approach configuration in a variety of unconventional or &quot;creative&quot; ways, which is largely incorrect. It's crucial to ensure your code can consistently load the configuration from specific locations, such as Environment Variables, Command-Line Arguments, and the application settings file, known as appsettings.json.<br>
To achieve this, adhere to the following initialization process:</p>
<pre><code class="language-csharp">private static IConfigurationRoot InitializeConfiguration(string[] args)
{
    var builder = new ConfigurationBuilder()
         .SetBasePath(Directory.GetCurrentDirectory())
         .AddJsonFile(&quot;appsettings.json&quot;, optional: false, 
          reloadOnChange: true)
         .AddCommandLine(args)
         .AddEnvironmentVariables();

    return builder.Build();
}
</code></pre>
<p>This code generates the instance of a builder that allows you to manage configuration values independently of their origins. Why is this significant? Generally, we aren't certain about the packaging conditions of your library (code), or how it will be executed. For instance, your code could be operative within ASP.NET, functioning as a console application, running in a Docker container, and so on. All of these application types can utilize a variety of methods for providing the configuration, which can offer unique advantages and disadvantages depending on where the code is executed. For example, if your code is running as a console application, it's beneficial to have the settings within the appsettings.json file. However, if the same code is deployed within a Docker container, supplying the configuration as environment variables could be more effective. Therefore, it's ideal to design your code to handle all possibilities, allowing the DevOps team to make the final decision on how to provide the configuration.</p>
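<p>Note that the order of the providers matters: a source added later overrides earlier ones, so with the order shown above, environment variables take precedence over command-line arguments, which in turn override appsettings.json. A minimal sketch using in-memory collections as stand-ins for the real sources (assuming the Microsoft.Extensions.Configuration package is referenced) demonstrates the rule:</p>

```csharp
using System;
using System.Collections.Generic;
using Microsoft.Extensions.Configuration;

// Demonstrates provider precedence: the provider added last wins.
// The two in-memory collections stand in for appsettings.json and
// environment variables respectively.
var configuration = new ConfigurationBuilder()
    .AddInMemoryCollection(new Dictionary<string, string?>
    {
        ["color"] = "red"      // e.g. value from appsettings.json
    })
    .AddInMemoryCollection(new Dictionary<string, string?>
    {
        ["color"] = "green"    // e.g. value from an environment variable
    })
    .Build();

Console.WriteLine(configuration["color"]); // green - the later provider overrides
```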
<p>The following code samples demonstrate how to read simple and complex configuration values (settings).</p>
<pre><code class="language-csharp">    //
    // The following values are coming from the command line.
    var color = configuration[&quot;color&quot;];
    Console.WriteLine(&quot;{0}&quot;, color);

    var fontSize = configuration[&quot;fontSize&quot;];
    var state = configuration[&quot;state&quot;];

    //
    // From root of appsettings.json
    var setting1 = configuration[&quot;Setting1&quot;];
    var setting2 = configuration[&quot;Setting2&quot;];
    var setting3 = configuration[&quot;Setting3&quot;];
    var sleepyState = configuration[&quot;SleepyState&quot;];
    var aaa = configuration[&quot;AAAA&quot;];
    var speed = configuration[&quot;Speed&quot;];

    float i = float.Parse(setting3, CultureInfo.InvariantCulture);
    //
    // Demonstrates how to read settings from sub section.
    var section = configuration.GetSection(&quot;MySubSettings&quot;);
    var subSetting1 = section[&quot;Setting1&quot;];
    var subSetting2 = section[&quot;Setting2&quot;];
    var subSetting3 = section[&quot;Setting3&quot;];
</code></pre>
<p>The following code shows how to read environment variables from the configuration. Please note that the code does not have any direct dependency on the Environment class.</p>
<pre><code class="language-csharp">
    var machineName = configuration[&quot;COMPUTERNAME&quot;];
    var processor = configuration[&quot;PROCESSOR_IDENTIFIER&quot;];
    
</code></pre>
<p>The variable COMPUTERNAME might also be specified inside appsettings.json file or provided as a command line argument.</p>
<p>More complex configuration is read as shown in the following example:</p>
<pre><code class="language-csharp">
   MySettings mySettings = new MySettings();
   configuration.GetSection(&quot;MySetting&quot;).Bind(mySettings);

</code></pre>
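<p>For Bind to work, the target class simply needs public properties whose names match the keys inside the section. The sketch below is a hypothetical example (the MySettings properties and values are invented for illustration and require the Microsoft.Extensions.Configuration.Binder package); an in-memory collection stands in for the &quot;MySetting&quot; section of appsettings.json:</p>

```csharp
using System;
using System.Collections.Generic;
using Microsoft.Extensions.Configuration;

// Hypothetical settings class; property names must match the keys in the section.
public class MySettings
{
    public string? Url { get; set; }
    public int RetryCount { get; set; }
}

public static class BindDemo
{
    public static void Main()
    {
        // Stands in for the "MySetting" section of appsettings.json.
        var configuration = new ConfigurationBuilder()
            .AddInMemoryCollection(new Dictionary<string, string?>
            {
                ["MySetting:Url"] = "https://daenet.com",
                ["MySetting:RetryCount"] = "3"
            })
            .Build();

        // Binding copies matching keys onto the properties.
        var mySettings = new MySettings();
        configuration.GetSection("MySetting").Bind(mySettings);

        Console.WriteLine(mySettings.Url);        // https://daenet.com
        Console.WriteLine(mySettings.RetryCount); // 3
    }
}
```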
</div>]]></content:encoded></item><item><title><![CDATA[What if "Semantic search is not enabled for this service."?]]></title><description><![CDATA[<div class="kg-card-markdown"><p>When consuming the Azure OpenAI service, the following error might occur:</p>
<blockquote>
<p>{&quot;error&quot;: {&quot;requestid&quot;: &quot;194182cc-cdc0-400a-8914-87c3e6fd7fe2&quot;, &quot;code&quot;: 400, &quot;message&quot;: &quot;An error occurred when calling Azure Cognitive Search: Azure Search Error: 400, message='Server responded with status 400. Error message: {&quot;error&quot;</p></blockquote></div>]]></description><link>https://developers.de/2023/12/11/what-if-semantic-search-is-not-enabled-for-this-service/</link><guid isPermaLink="false">6576f2e11dee311b782de7e4</guid><category><![CDATA[GPT]]></category><category><![CDATA[LLM]]></category><category><![CDATA[AI]]></category><category><![CDATA[Azure]]></category><dc:creator><![CDATA[Damir Dobric]]></dc:creator><pubDate>Mon, 11 Dec 2023 11:36:29 GMT</pubDate><media:content url="https://developersde.blob.core.windows.net/usercontent/2023/12/111136_SemanticRanger.png" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://developersde.blob.core.windows.net/usercontent/2023/12/111136_SemanticRanger.png" alt="What if &quot;Semantic search is not enabled for this service.&quot;?"><p>When consuming the Azure OpenAI service, the following error might occur:</p>
<blockquote>
<p>{&quot;error&quot;: {&quot;requestid&quot;: &quot;194182cc-cdc0-400a-8914-87c3e6fd7fe2&quot;, &quot;code&quot;: 400, &quot;message&quot;: &quot;An error occurred when calling Azure Cognitive Search: Azure Search Error: 400, message='Server responded with status 400. Error message: {&quot;error&quot;:{&quot;code&quot;:&quot;FeatureNotSupportedInService&quot;,&quot;message&quot;:&quot;Semantic search is not enabled for this service.\\r\\nParameter name: queryType&quot;,&quot;details&quot;:[{&quot;code&quot;:&quot;SemanticQueriesNotAvailable&quot;,&quot;message&quot;:&quot;Semantic search is not enabled for this service.&quot;}]}}', url=URL('<a href="https://host.search.windows.net/indexes/semantic-index-with-embeddings/docs/search?api-version=2023-07-01-Preview">https://host.search.windows.net/indexes/semantic-index-with-embeddings/docs/search?api-version=2023-07-01-Preview</a>')\nCall to Azure Search instance failed.\nAPI Users: Please ensure you are using the right instance, index_name and provide admin_key as the api_key.\n&quot;}}</p>
</blockquote>
<p>The error happens if the semantic plan is NOT activated in the Cognitive Search service. To enable the plan, please select the <em>Semantic Ranker</em> and then activate the plan.</p>
<p><img src="https://developersde.blob.core.windows.net/usercontent/2023/12/111133_SemanticRanger.png" alt="What if &quot;Semantic search is not enabled for this service.&quot;?"></p>
</div>]]></content:encoded></item><item><title><![CDATA[How to group files together in Visual Studio]]></title><description><![CDATA[<div class="kg-card-markdown"><p>When working on large projects, we usually design the API(s) to implement most of the requirements. Sometimes, the API might contain a lot of methods. In such cases, it is recommended to split the methods of the API into multiple classes. However, there is no rule that defines the exact</p></div>]]></description><link>https://developers.de/2023/10/25/how-to-group-files-together-in-visual-studio/</link><guid isPermaLink="false">6536083583cd651a6c1ea277</guid><dc:creator><![CDATA[Damir Dobric]]></dc:creator><pubDate>Wed, 25 Oct 2023 07:30:00 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>When working on large projects, we usually design the API(s) to implement most of the requirements. Sometimes, the API might contain a lot of methods. In such cases, it is recommended to split the methods of the API into multiple classes. However, there is no rule that defines the exact threshold for the number of methods inside the API to start splitting the API class into multiple classes.</p>
<p>For example, a complex <strong>MyApi</strong> might be split into classes <strong>MyApi1</strong>, <strong>MyApi2</strong>, and so on. This sounds simple, but splitting into multiple APIs might also have many disadvantages. In that case, you keep the implementation in <strong>MyApi</strong>, but such a large class becomes difficult to manage within the team.</p>
<p>One interesting solution that we use in projects is to allow <strong>MyApi</strong> to grow, but to split the implementation into multiple files:</p>
<pre><code class="language-csharp">MyApi.cs
MyApiPart1.cs
MyApiPart2.cs
</code></pre>
<p>In Visual Studio these files look like:</p>
<p><img src="https://developersde.blob.core.windows.net/usercontent/2023/10/23554_vsfilesgrouping1.png" alt="23554_vsfilesgrouping1"></p>
<p>To achieve a better structure of files in Solution Explorer, we group the files together.</p>
<p><img src="https://developersde.blob.core.windows.net/usercontent/2023/10/23618_vsfilesgrouping2.png" alt="23618_vsfilesgrouping2"></p>
<p>To achieve this, the following must be done in the <em>.csproj</em> file.</p>
<pre><code class="language-xml">  &lt;ItemGroup&gt;

    &lt;Content Include=&quot;MyApi.cs&quot; /&gt;
    &lt;Content Include=&quot;MyApiPart1.cs&quot;&gt;
      &lt;DependentUpon&gt;MyApi.cs&lt;/DependentUpon&gt;
    &lt;/Content&gt;
    &lt;Content Include=&quot;MyApiPart2.cs&quot;&gt;
      &lt;DependentUpon&gt;MyApi.cs&lt;/DependentUpon&gt;
    &lt;/Content&gt;
  &lt;/ItemGroup&gt;
</code></pre>
</div>]]></content:encoded></item><item><title><![CDATA[Chat GPT at VDMA]]></title><description><![CDATA[<div class="kg-card-markdown"><p>Marco and I will be sharing the AI/GPT workshop redelivery today. All VDMA attendees will have the opportunity to learn a lot about GPT and the solutions that can be created using this technology. Furthermore, we will dive deep into the LLM (Large Language Model) technology stack and explain</p></div>]]></description><link>https://developers.de/2023/09/12/chat-gpt-at-vdma/</link><guid isPermaLink="false">65001fea8f7acc1c5c488fc4</guid><dc:creator><![CDATA[Damir Dobric]]></dc:creator><pubDate>Tue, 12 Sep 2023 08:31:14 GMT</pubDate><media:content url="https://developersde.blob.core.windows.net/usercontent/2023/9/12831_ChatGPT%20VDMA.png" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://developersde.blob.core.windows.net/usercontent/2023/9/12831_ChatGPT%20VDMA.png" alt="Chat GPT at VDMA"><p>Marco and I will be sharing the AI/GPT workshop redelivery today. All VDMA attendees will have the opportunity to learn a lot about GPT and the solutions that can be created using this technology. Furthermore, we will dive deep into the LLM (Large Language Model) technology stack and explain how it all works.</p>
<h2 id="agenda">AGENDA</h2>
<h3 id="1100ergebnisstimmungsbildabfragevorstellungsrundefragendestages">11:00 Results of the mood survey; round of introductions, questions of the day</h3>
<p>Dr. Nora Lauterbach, VDMA Landesverband Mitte</p>
<h3 id="1120chatgptimunternehmensalltag">11:20 Chat-GPT in everyday business</h3>
<ul>
<li>Who can use Chat-GPT, and how should it be used, to get meaningful support for daily work?</li>
<li>Putting Chat-GPT to use - public data / company data</li>
<li>What can Chat-GPT access?</li>
</ul>
<p>Marco Richardson, Founder &amp; Board Member, Microsoft Regional Director, Inclusify AG, Nürnberg</p>
<h3 id="1200fragendiskussionen">12:00 Questions/discussions</h3>
<p>All participants, moderation: Dr. Nora Lauterbach, VDMA Landesverband Mitte</p>
<h3 id="1215chatgpteinordnung">12:15 Chat-GPT – putting it into context</h3>
<ul>
<li>What is GPT and how does it work?</li>
<li>What distinguishes GPT from artificial intelligence and machine learning?</li>
<li>Which technology is suited to which area of use?</li>
</ul>
<p>Damir Dobric, Lead Software Architect, Microsoft Regional Director, daenet Gesellschaft für Informationstechnologie mbH, Frankfurt a.M.</p>
<h3 id="1300fragendiskussionen">13:00 Questions/discussions</h3>
<p>All participants, moderation: Dr. Nora Lauterbach, VDMA Landesverband Mitte</p>
<h3 id="1315einladungzumgemeinsamenmittagessen">13:15 Invitation to a joint lunch</h3>
<h3 id="1415workshopsinkleingruppen">14:15 Workshops in small groups</h3>
<ul>
<li>Introducing Chat-GPT into the company: how do I start a project?</li>
<li>Chat-GPT - design thinking - who can benefit from it, and how. Finding and designing business cases.</li>
<li>Questions of the day</li>
</ul>
<p>All participants, moderation: Dr. Nora Lauterbach, VDMA Landesverband Mitte</p>
<h3 id="1540kurzprsentationausgewhlteergebnisseworkshopfeedbackcheckout">15:40 Short presentation of selected workshop results, feedback, check-out</h3>
<p>Elected group speakers, moderation: Dr. Nora Lauterbach</p>
<h3 id="1600offiziellesendedeserfasfreieraustauschmitreferentenundnetworkingim">16:00 Official end of the Erfa / informal exchange with the speakers and networking</h3>
<p>among the participants.</p>
<p><img src="https://developersde.blob.core.windows.net/usercontent/2023/9/12830_ChatGPT%20VDMA.png" alt="Chat GPT at VDMA"></p>
</div>]]></content:encoded></item><item><title><![CDATA[GPT Day@VDMA in Frankfurt 2023]]></title><description><![CDATA[<div class="kg-card-markdown"><p>This is the agenda of our GPT Day at VDMA in Frankfurt am Main.</p>
<h3 id="agenda">Agenda</h3>
<p><em>When: Tue, 18.07.23</em></p>
<h4 id="10301100uhr">10:30 - 11:00</h4>
<p><strong>Early bird for networkers</strong></p>
<h4 id="11001120uhr">11:00 - 11:20</h4>
<p><strong>Results of the mood survey; round of introductions, questions of the day</strong></p>
<p>Dr. Nora Lauterbach<br>
Advisor<br>
VDMA Landesverband Mitte, Frankfurt</p></div>]]></description><link>https://developers.de/2023/07/18/gpt-day-vdma-in-frankfurt-2023/</link><guid isPermaLink="false">64b5b13a6736ad13a4dce76b</guid><category><![CDATA[GPT]]></category><dc:creator><![CDATA[Damir Dobric]]></dc:creator><pubDate>Tue, 18 Jul 2023 08:00:00 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>This is the agenda of our GPT Day at VDMA in Frankfurt am Main.</p>
<h3 id="agenda">Agenda</h3>
<p><em>When: Tue, 18.07.23</em></p>
<h4 id="10301100uhr">10:30 - 11:00</h4>
<p><strong>Early bird for networkers</strong></p>
<h4 id="11001120uhr">11:00 - 11:20</h4>
<p><strong>Results of the mood survey; round of introductions, questions of the day</strong></p>
<p>Dr. Nora Lauterbach<br>
Advisor<br>
VDMA Landesverband Mitte, Frankfurt am Main, Germany</p>
<h4 id="11201200uhr">11:20 - 12:00</h4>
<p><strong>Chat-GPT in everyday business</strong></p>
<ul>
<li>Who can use Chat-GPT, and how should it be used, to get meaningful support for daily work?</li>
<li>Putting Chat-GPT to use - public data / company data</li>
<li>What can Chat-GPT access?</li>
</ul>
<p>Marco Richardson<br>
Founder &amp; Board Member<br>
Inclusify AG, Nürnberg, Germany<br>
Microsoft Regional Director</p>
<h4 id="12001215uhr">12:00 - 12:15</h4>
<p><strong>Questions/discussions</strong></p>
<p>All participants</p>
<h4 id="12151300uhr">12:15 - 13:00</h4>
<p><strong>Chat-GPT – putting it into context</strong></p>
<ul>
<li>What is GPT and how does it work?</li>
<li>What distinguishes GPT from artificial intelligence and machine learning?</li>
<li>Which technology is suited to which area of use?</li>
</ul>
<p>Damir Dobric<br>
Lead Software Architect<br>
daenet Gesellschaft für Informationstechnologie mbH, Frankfurt am Main, Germany<br>
Microsoft Most Valuable Professional Azure, Microsoft Regional Director</p>
<h4 id="13001315uhr">13:00 - 13:15</h4>
<p><strong>Questions/discussions</strong><br>
All participants</p>
<h4 id="13151415uhr">13:15 - 14:15</h4>
<p><strong>Lunch</strong></p>
<h4 id="14151540uhr">14:15 - 15:40</h4>
<p><strong>Workshops in small groups</strong></p>
<ul>
<li>Introducing Chat-GPT into the company: how do I start a project?</li>
<li>Chat-GPT design thinking: who can benefit from it, and how. Finding and designing business cases.</li>
<li>Questions of the day</li>
</ul>
<p>All participants</p>
<h4 id="15401600uhr">15:40 - 16:00</h4>
<p>Short presentation of selected workshop results, feedback, check-out</p>
</div>]]></content:encoded></item></channel></rss>