It is good, but I need to deal with firewalls and also handle lost data when the destination PC is off. I need something more trustworthy; also, the operator at the destination may not approve or reject the transfer request on the same day.
Ran into the same problem and found this discussion... I seemed to be doing everything right, but it just wouldn't work.
SOLUTION: it only worked when I clicked the pivot-table creation button directly from Pivot.
After that, the tables previously created from the standard interface (on other sheets) also started working normally.
I'd like to find the commit that removes the line
This is trivial to do in gitk. Right click on the deleted line and select "Show origin of this line":
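If you prefer the command line, the same lookup can be done with git's "pickaxe" search. A self-contained sketch in a throwaway repo (the file name and line text are made up):

```shell
# Create a throwaway repo where a line is added and then removed.
dir=$(mktemp -d) && cd "$dir" && git init -q
git config user.email you@example.com && git config user.name you
printf 'keep\nneedle\n' > f.txt && git add f.txt && git commit -qm 'add needle'
printf 'keep\n' > f.txt && git commit -qam 'remove needle'

# -S lists commits where the number of occurrences of the string changed,
# i.e. both the commit that added the line and the one that removed it.
# The newest hit is the removal.
git log -S'needle' --oneline -- f.txt
```

`-S` takes the exact text; use `-G` instead if you want a regex match on the changed lines.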
// Older versions pinned for compatibility with iOS 10-era browsers
// "react-markdown": "^8.0.7",
// "rehype-raw": "^6.1.1",
// "remark-gfm": "^3.0.1",
<ReactMarkdown
rehypePlugins={[rehypeRaw]}
remarkPlugins={[remarkGfm]}
>{markdown}</ReactMarkdown>
That was my first project. I jumped into coding, but first created a new project workspace, initialized git, and pushed before coding. That automatically invoked an extension that gave me a code for GitHub authentication to connect my device to GitHub, and I successfully pushed the new project. Then I tried my first project; it again generated a code to connect my device, and I pushed successfully.
You only need to stub it in the component where you're using router-link. I just ran into this warning, and it seems not well documented that it can be solved like so:
import { mount, RouterLinkStub } from "@vue/test-utils";
import { describe, expect, it } from "vitest";
import SomeComponent from "@/components/SomeComponent.vue";

describe("SomeComponent tests", () => {
  it("Renders my component with links in it", () => {
    const some_comp = mount(SomeComponent, {
      props: {
        label: "some text",
        link: "/some/url/here",
      },
      global: {
        stubs: {
          "router-link": RouterLinkStub,
        },
      },
    });

    // it renders and we check the link is set:
    expect(some_comp.text()).toContain("some text");
    const link_component = some_comp.findComponent(RouterLinkStub);
    expect(link_component.props().to).toBe("/some/url/here");
  });
});
Thanks @Friede, but that doesn't even begin to address the problem.
Collect all contents in a package.
To successfully set the background color when using an image as a launch screen:
create a Color Set in Assets.xcassets and name it, e.g. splash-background-color
in Info.plist, under "Launch Screen" > "Background color", enter the named color, e.g. splash-background-color
restart the iOS Simulator - this is crucial (and stupid)! Xcode 26.0 will show you just a blank screen until you restart the simulator after any Launch Screen related change; Clean does not help.
npm config get prefix -g
The change since 2013 is that it needs a -g
A) No. B) no one is going to give you access to their account.
Thanks. "Better" basically means "running/working", as my approach does not work in the context of the Shiny app. "Secure" means that the function loaded from an external file runs inside a sandbox environment. The sandbox is needed - or at least I would say so - to reduce the risk of importing malicious code. The whole idea of the app is that it is later used by a less tech-savvy person to provide access to data that was previously not available to researchers. Now I could surely implement a bunch of "shapes of data" to transform the logdata into, but it would be more extensible if additional functions for the transformation of logdata could simply be loaded from a file. This allows experts in the field of data analysis to provide additional data wrangling functions by simply sending a file to the person operating the Shiny app. So, no, I would not call it an XY problem.
How I am calling the functions: I take the source code of the function stored in the variable fun and then evaluate the function in the sandbox: eval(parse(text = fun), envir = sandbox_env). I then get the function object with env_fun <- get(fun_name, envir = sandbox_env, inherits = FALSE) and execute it with result <- env_fun(df).
So I don't simply pass the function itself, as it is loaded from a file and run in a different environment. As this approach works when run in a simple R script but not in a Shiny app, I assume it has something to do with the environment.
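For readers more comfortable outside R, the load-evaluate-call pattern described above looks like this in Python (an illustration only, not the app's R code; the function name and source are made up):

```python
# Source of the function, as it would be read from an external file.
fun_source = "def summarise(values):\n    return sum(values) / len(values)"
fun_name = "summarise"

# Evaluate the source inside a restricted namespace (the "sandbox"):
# only the builtins listed here are reachable from the loaded code.
sandbox_env = {"__builtins__": {"sum": sum, "len": len}}
exec(fun_source, sandbox_env)       # ~ eval(parse(text = fun), envir = sandbox_env)

# Fetch the function object from the sandbox and call it with the data:
env_fun = sandbox_env[fun_name]     # ~ get(fun_name, envir = sandbox_env)
result = env_fun([1, 2, 3])
print(result)                       # 2.0
```

Note that this kind of namespace restriction, like R environments, limits what names the code can see but is not a hardened security boundary.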
Use this version: pip install "langchain==0.3.27"
There is a CityInfo.db database present in the location
/System/Library/PrivateFrameworks/AppSupport.framework/Resources
It contains a cities table which holds the same time zone data as the World Clock in the Alarm app.
import java.io.File;
import java.io.FileInputStream;
import java.security.KeyStore;

String keyStorePassword = "NEWPASSWORD";
KeyStore keyStore = KeyStore.getInstance(KeyStore.getDefaultType());
File file = new File(dir, "xyz.jks");
// try-with-resources so the stream is closed after loading
try (FileInputStream in = new FileInputStream(file)) {
    keyStore.load(in, keyStorePassword.toCharArray());
}
"better" needs a metric against which to judge candidates. How do you define better?
Similarly, how do you define "secure"?
Some questions:
Why pass the body of the function as a character string (as well as the name of the function), when you could simply pass the function itself?
I don't understand the need for a sandbox environment. Please expand on your logic.
Please show how you are calling summarise_per_week inside shape_logdata. Your current example is not at all clear.
"obviously the dataframe is not available to the function when run inside a Shiny app". Based on the information you've provided, it's not at all clear to me that the problem is that the data.frame is not available.
I wonder whether this may be an XY problem: why not provide the functionality you want to give users in a package and load the package as part of the app's start-up? [That's the way I would (and have) addressed similar issues, but I accept the suitability of that approach may depend on what you mean by "better" and "secure".]
The following syntax works. In addition, the access level of the token needs to be Reporter or higher.
uvx --from git+https://gitlab_username:[email protected] tool
Regarding your questions:
Q1: What does the ENVELOPE_ALLOWANCE_EXCEEDED error mean in Docusign sandbox?
It means you have reached the maximum number of envelopes allowed in your developer sandbox account and cannot send more.
Q2: Can I reset or increase the envelope allowance on my developer sandbox account?
No. Docusign does not reset or increase the envelope limit for sandbox accounts.
Q3: What should I do if I reach the envelope limit in my sandbox?
Request or create a new developer sandbox account to continue testing.
Q4: Can I use a trial (production) account to create new Integration Keys and continue testing?
No. Trial (production) accounts do not allow creating new Integration Keys. Integration Keys can only be created in sandbox accounts.
Q5: What is the proper workflow for testing Docusign integrations?
Create Integration Keys and test all features in your sandbox. When testing is complete, submit a Go-Live request to move your application to production.
Because Outlook frequently has trouble with large or sophisticated MSG data, managing Outlook MSG files with thousands of recipients can be challenging. Using a specialist conversion tool that can safely handle large amounts of data without losing formatting, headers or recipients is the most dependable approach. A decent choice is the Softaken MSG to PST Converter. It converts MSG files into a clear, error-free PST file, supports batch import and preserves all email attributes, including lengthy recipient lists. This program is compatible with all versions of Outlook and maintains folder structure.
Did you mean to post this on codereview?
This is how I solved it.
I added more designers to the BOM file.
And I divided the number of LEDs for each one.
To add more designers, just copy the first row and divide the number of LEDs in the designers section.
I'm having the same error once I add new rows above the table my pivot tables take data from. Is there a way to fix this? I want a few empty rows above that table.
@RequestMapping is not the same as @HttpExchange.
You have to use:
MultiValueMap<String, String> params = new LinkedMultiValueMap<>();
@GetExchange("/api/v1.0/question")
Mono<String> question(@RequestParam MultiValueMap<String, String> params);
Whoever is reading this: if you are having the same issue, take a look at this GitHub issue: https://github.com/dotnet/runtime/issues/119648. You will find the solution there, but for reference, here is the code that works:
using System;
using System.IO;
using System.Reflection;
using System.Runtime.Loader;
namespace Host {
public class PluginAssemblyLoadContext : AssemblyLoadContext {
public PluginAssemblyLoadContext() : base(true) { }
}
public class HostState {
public Assembly pluginAssembly;
public PluginAssemblyLoadContext loadContext;
public Action pluginExecute;
}
public static class Host {
public const string PLUGIN_PATH = "../plugin/bin/Debug/net10.0/plugin.dll";
public static void Main() {
HostState host = new HostState();
host.loadContext = new PluginAssemblyLoadContext();
ReloadDll(host);
Console.WriteLine("Press E to execute the plugin, R to reload, or Q to quit.");
for (;;) {
var key = Console.ReadKey(true);
if (key.Key == ConsoleKey.E) {
host.pluginExecute.Invoke();
} else if (key.Key == ConsoleKey.R) {
ReloadDll(host);
} else if (key.Key == ConsoleKey.Q) {
return;
}
}
}
public static void ReloadDll(HostState host) {
host.loadContext = new PluginAssemblyLoadContext();
byte[] assemblyBytes = File.ReadAllBytes(Path.GetFullPath(PLUGIN_PATH));
using MemoryStream ms = new MemoryStream(assemblyBytes);
host.pluginAssembly = host.loadContext.LoadFromStream(ms);
Type type = host.pluginAssembly.GetType("Plugin");
MethodInfo methodInfo = type.GetMethod("Execute", BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Static, []);
if (methodInfo != null) {
host.pluginExecute = methodInfo.CreateDelegate<Action>();
}
}
}
}
In the past, anyone could use BASE_URL + ControllerName in an HTTP request in Angular + .NET Core to call a server controller, but in the latest version things work a little differently.
I've investigated why I couldn't reach server controllers just by passing "/ControllerName" in HTTP requests.
Basically, Angular 18 + .NET Core has two files that need to be edited, since Angular communicates through a proxy: proxy.conf.js on the Angular side and the #ProjectName#.server.http file on the server side.
In both files you have to provide the name of the controller to establish communication.
Therefore BASE_URL is not required and you can just call "/ControllerName" in HTTP requests.
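For reference, a minimal proxy.conf.js entry might look like the sketch below; "/ControllerName" and the target port are placeholders, and your template may generate a slightly different shape:

```javascript
// proxy.conf.js - forwards matching browser calls to the ASP.NET Core backend.
// "/ControllerName" and the port are placeholders for your own values.
const PROXY_CONFIG = [
  {
    context: ["/ControllerName"],
    target: "https://localhost:7042",
    secure: false // accept the self-signed dev certificate
  }
];

module.exports = PROXY_CONFIG;
```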
Hello Vinicius,
thank you so much for your efforts and support, it really helped! To make things unfortunately a bit more complicated: this plot is part of a patchwork plot with a total of 4 plots, 2 on top and 2 on the bottom. I’ve now set it so that the top and bottom plots each share the same Y-axis alignment, so that the positive and negative values start from the same position.
However, this has caused geom_text to be displayed incorrectly again. Is there a solution for this within the existing code?
# Patchwork Arrangement:
plot_oben <- p1 | p3
plot_unten <- p2 | p4
plot <- plot_oben / plot_unten
# shared Y-Axis:
# p_oben (p1 and p3)
max_oben <- max(
data_schule_schulform$Anzahl,
data_2_schule_schulform$Anzahl,
data_schule_jahrgang$Anzahl,
data_2_schule_jahrgang$Anzahl
)
# p_unten (p2 and p4)
max_unten <- max(
data_schulform_1$Anzahl,
data_schulform_2$Anzahl,
data_schulform_3$Anzahl,
data_schulform_4$Anzahl
)
Below each plot I added, for example...
p1 <- p1 +
scale_y_continuous(limits = c(-max_oben, max_oben))
p2 <- p2 +
scale_y_continuous(limits = c(-max_unten, max_unten))
...
You need to add "_" before and after the minus sign
something like this
h-[calc(100vh_-_230px)]
input:focus {
border: 1px solid black;
}
This is working. Try it!
When I use the action: keep or action: drop operations, the subsequent action: replace operations will not take effect, and the filtering operations for keep or drop matching also fail to work properly, using the same simple matching as described above. When I comment out the keep or drop operations, the final action: replace operations successfully add labels based on the metadata.
First, you have to make sure you configured your application to load the config correctly:
Example from Serilog:
static void Main(string[] args)
{
var configuration = new ConfigurationBuilder()
.SetBasePath(Directory.GetCurrentDirectory())
.AddJsonFile("appsettings.json")
.AddJsonFile($"appsettings.{Environment.GetEnvironmentVariable("ASPNETCORE_ENVIRONMENT") ?? "Production"}.json", true)
.Build();
var logger = new LoggerConfiguration()
.ReadFrom.Configuration(configuration)
.CreateLogger();
logger.Information("Hello, world!");
}
If you did this, you have to structure your JSON correctly, like this: https://github.com/serilog/serilog-sinks-rollingfile?tab=readme-ov-file#controlling-event-formatting
{
"Serilog": {
"WriteTo": [
{ "Name": "RollingFile", "Args": { "pathFormat": "log-{Date}.txt" } }
]
}
}
@ch4mp thanks for the answer and the article. While the gateway approach is a bit of an overkill for my scenario (I have a single frontend application), it pushed me in the right direction. I now have a proxy controller that intercepts each browser call, gets the JWT token from the session, and forwards the call with the JWT token to stateless REST endpoints.
Can you try using Navigator.push instead of Navigator.pop to see if the issue persists? Also, if you are using any packages related to navigation or state management, please let me know. I can connect with you and help resolve the issue.
I repeated the steps with IntelliJ (same as any other JetBrains product, such as Rider). It works perfectly. For context, the built-in SQLite driver in JetBrains IDEs does not support SQLCipher encryption. The reason you see that error is simply that the default SQLite driver sees your encrypted database as a bunch of random bytes with no meaning.
First, download the sqlite-jdbc-crypt (I downloaded sqlite-jdbc-3.50.1.0.jar)
Second, define a custom driver:
Third, add your driver file that you downloaded from the first step
It must look like this:
Then, use that recently added driver as your "Data Source".
Finally, add a url like this containing the address to your encrypted sqlite db file, your key, kdf_iter, etc: jdbc:sqlite:file:/home/USER/test_database_v4.db?cipher=sqlcipher&legacy=4&kdf_iter=256000&cipher_page_size=4096&key=mySecretPassword123
That's it!
In the comments section, Nate Eldredge left the following answer regarding .wrapping_shl():
They're identical to the point that the compiler just emits the code once and defines the other versions as aliases: godbolt.org/z/jGW3b6K1c
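For context, a quick sketch of what `wrapping_shl` actually does (its semantics, separate from the codegen point above): it masks the shift amount by the type's bit width instead of triggering overflow checks.

```rust
fn main() {
    let x: u32 = 1;
    // 33 is masked to 33 % 32 == 1, so this is a shift by one:
    assert_eq!(x.wrapping_shl(33), 2);
    // the checked variant reports the out-of-range shift instead:
    assert_eq!(x.checked_shl(33), None);
    println!("ok");
}
```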
The confusion you are facing when starting an Android-based XR (VR/AR) project with Unreal Engine (UE) is completely normal. Unity has a relatively clear path through AR Foundation and the XR Interaction Toolkit, whereas Unreal has made a major transition to OpenXR over the last year or two and its documentation is fragmented.
To state the conclusion first: **"Unreal officially supports Android XR, but 'pure OpenXR' alone is not enough; it is most stable when used together with the hardware vendor's plugins (Meta, Pico, etc.)."**
Based on your questions, here is a practical summary of the current state and the setup.
The following is a standard setup for Unreal Engine 5.3-5.4, currently the most stable versions. Mismatched versions are very likely to cause packaging errors.
Engine Version: UE 5.4 (recommended) or 5.3. XR rendering (Vulkan) performance is greatly improved in 5.4.
Android Studio: Flamingo or Giraffe, depending on the UE version (Flamingo/Giraffe recommended for UE 5.4).
NDK: r26b (UE 5.4) / r25b (UE 5.3). The exact path must be set in Project Settings.
JDK: OpenJDK 17. Setting the JAVA_HOME environment variable is required.
Build System: Gradle. You will be using an AGP (Android Gradle Plugin) 8.x version.
Min SDK: 29 (Android 10) or higher. XR devices (Quest 3, etc.) usually run a recent OS, so set 29-32.
Target SDK: 32 or 34. Check the latest requirements when publishing to the Google Play Store.
[Required plugins]
OpenXR: Enabled (required; engine core feature)
OpenXRHandTracking: Enabled (if you need hand tracking)
Mobile Foveated Rendering: Enabled (essential for performance)
As for your question, **"does it work with OpenXR alone, without vendor integrations?"**, the answer is: **"it runs, but vendor plugins are essential for production quality."**
Pure OpenXR (native):
With only Unreal's OpenXR plugin enabled, you can run the app on Meta Quest or Pico and get head tracking and basic controller input.
Problem: vendor-specific features (e.g. Meta's Passthrough and Scene Understanding, Pico's specific controller models, refresh rate control) are either not yet part of the standard OpenXR API or exist only as extensions.
Realistic workflow (hybrid):
Base: enable the OpenXR plugin (handles the standard API).
Extension: additionally enable the plugin for your target hardware.
Meta Quest: Meta XR Plugin (built on OpenXR; provides the essential features)
Pico: Pico OpenXR Plugin
Android (handheld AR): Google ARCore plugin
This works much like Unity's XR Plug-in Management system.
This is the most confusing point in the documentation: the term 'Android XR' is used for two different things.
Handheld AR (phones/tablets):
Technology: uses Google ARCore.
Setup: enable the Google ARCore plugin and run Configure Google ARCore in Project Settings.
Status: compared with Unity's AR Foundation, Unreal's AR support receives feature updates slowly. Simple AR works, but complex interactions may require C++ work.
HMD VR/MR (Android-based devices such as Quest and Pico):
Technology: uses OpenXR.
Setup: use the OpenXR + vendor plugin combination. ARCore is not used (Passthrough is handled by the vendor SDK).
Status: with Unreal 5's Nanite and Lumen starting to gain limited support on mobile (Android) XR, the graphics quality potential is greater than Unity's.
These are the practical hurdles you will face developing Android XR with Unreal compared to Unity.
Initial setup difficulty (Android setup):
Unreal provides the SetupAndroid.bat script, but the build fails if the Java or NDK version is even slightly off. Nothing is managed automatically the way Unity Hub does it.
Fix: before starting the project, verify through the "Android Turnkey" setup that every SDK path shows green (valid).
Performance and build size:
Even an empty project produces a larger APK than Unity (roughly 100 MB and up).
Unreal's rendering pipeline is heavy on mobile GPUs. You must enable Forward Shading and configure Instanced Stereo Rendering or Mobile Multi-View to hold the frame rate.
Lack of official documentation:
Unreal's official documentation often fails to reflect the latest changes.
Tip: Meta's Unreal developer documentation and Pico's developer documentation are far more accurate references than the Epic Games docs.
Google's new "Android XR":
Using Ruby on Rails
You have #{n} #{'kid'.pluralize(n)}
See pluralize doc for options & alternatives:
https://api.rubyonrails.org/classes/ActionView/Helpers/TextHelper.html#method-i-pluralize
Fortnite restricts mouse movements from 3rd-party programs. Your choices are to make a kernel driver that attaches to Fortnite and hope you don't get banned,
or be safe and use an Arduino.
// Source - https://stackoverflow.com/a/63920302
// Posted by matdev, modified by community. See post 'Timeline' for change history
// Retrieved 2025-11-25, License - CC BY-SA 4.0
buildTypes {
debug{...}
release{...}
}
// Specifies one flavor dimension.
flavorDimensions "main"
productFlavors {
demo {
// Assigns this product flavor to the "main" flavor dimension.
// If you are using only one dimension, this property is optional,
// and the plugin automatically assigns all the module's flavors to
// that dimension.
dimension "main"
applicationId "com.appdemo"
versionNameSuffix "-demo"
}
full {
dimension "main"
applicationId "com.appfull"
versionNameSuffix "-full"
}
}
Date: 13/11/2025
To:
Finance Department
HungerStation Delivery Company
Subject: Notification of Change in Bank Account IBAN Details
Dear HungerStation Delivery Team,
We would like to inform you that the bank account details of Malbriz Arabia Company have been updated. Kindly take note of the new IBAN information provided below and ensure that all future payments, transfers, or transactions are made to the updated account.
Previous Bank Details:
· Bank Name: Saudi National Bank
· Account Name: مطعم الأرز المفضل لتقديم الوجبات
· Old IBAN: SA8110000001400023615710
New Bank Details:
· Bank Name: Saudi National Bank
· Account Name: Malbriz Arabia Co
· New IBAN: SA8110000001400023615710
· SWIFT/BIC (if applicable): NCBKSAJE
Please update your records accordingly to avoid any interruption in payments. The old IBAN will no longer be in use after 01/08/2025.
We request you to kindly confirm the update of our banking details in your records.
Thank you for your continued support and cooperation.
Yours sincerely,
Muhammed Shahin
General Manager
Malbriz Arabia Company
xdebug.discover_client_host = true
or
xdebug.client_host = "127.0.0.1"
is the key point for newer Xdebug versions
After some research, I came across these two forum posts, describing exactly the same behaviour: https://developer.apple.com/forums/thread/778184 and https://developer.apple.com/forums/thread/772999. The answer in both was to enable all interface orientations for iPad.
I tried that via the project settings in Xcode:
Alas, selecting all orientations for iPad did not work.
But then I remembered in our app we also (for reasons) define this set of options programmatically, via the AppDelegate. I applied the same changes there:
func application(_ application: UIApplication, supportedInterfaceOrientationsFor window: UIWindow?) -> UIInterfaceOrientationMask {
switch UIDevice.current.deviceIdiom { // `deviceIdiom` is our own property for handling device idioms
case .phone:
return .allButUpsideDown
case .pad:
return .landscape
case .mac:
return .all // <-- Return `.all` here!
}
}
And, voila, we have content in popovers on Mac Catalyst!
I decided to use the simplest approach and just restart the application when the certificate changes, using Spring Actuator.
To do this, we should enable the restart endpoint in application.properties:
management.endpoint.restart.access=read_only
and in my ContainerConfiguration.java, autowire RestartEndpoint.
My reloadSSLConfig method now looks like this:
private void reloadSSLConfig() {
restartEndpoint.restart();
}
PS: I've also found an article about hot reloading SSL in Spring: SSL hot reload in Spring Boot 3.2.0
It looks like the litespeed_docref tag is added automatically by the LiteSpeed server or plugin, and you can usually disable it from the LiteSpeed Cache settings under the Debug or Toolbox section. If there is no option available, you can remove it with a small code snippet in your theme's functions file that strips that meta tag from the header. After making the change, check your site's response headers to confirm the tag is gone.
The Shortcut control returns a flat list of all keys pressed. In the Windows API Register Hotkey world, keys are distinct from modifiers, but in the input handling world (the control), they are all just "keys". You need to iterate through that list and sort the keys: if it is a modifier (ctrl, shift, alt, win), add it to a flags enum; otherwise, treat it as the specific trigger key.
I use the following config; it works locally:
cat /opt/homebrew/etc/php/8.3/conf.d/xdebug.ini
xdebug.mode = debug
xdebug.discover_client_host = true
Hello, I am also developing a similar feature at the moment. Could you spare some time to discuss it?
Go to the obj folder in your project folder and remove all files from the Debug, Release, and x86 folders. Then clean your solution and rebuild; this will solve your problem.
AI in Digital Marketing
Introduction
Artificial Intelligence (AI) is transforming the way businesses approach marketing. In digital marketing, AI helps companies understand customer behavior, optimize campaigns, and make data-driven decisions. By integrating AI into marketing strategies, businesses can enhance customer experiences and improve results.
Role of AI in Digital Marketing
AI in digital marketing is used in various areas, such as:
Personalization: AI analyzes user behavior to provide personalized marketing content and recommendations.
Automation: Digital marketing AI tools like chatbots and automated email campaigns save time and improve efficiency.
Data Analysis: AI quickly processes large amounts of data to provide insights for AI marketing strategies.
Content Creation: AI tools assist in generating social media posts, ad copy, and blogs for AI in online marketing campaigns.
Popular AI Tools for Digital Marketing
Some effective digital marketing AI tools include:
Chatbots: Automated customer support (e.g., Drift, ManyChat) for better engagement.
Predictive Analytics: Helps forecast future customer behavior, a key part of AI marketing strategies.
Content Generation Tools: AI writing platforms (e.g., Jasper, Copy.ai) enhance content creation for AI in online marketing.
Ad Optimization Tools: AI improves ad targeting and ROI on platforms like Google Ads and Facebook Ads.
Benefits of AI in Digital Marketing
Using AI in digital marketing brings many advantages:
Enhanced Customer Experience: Personalized content strengthens customer loyalty.
Cost Efficiency: Reduces manual tasks and increases productivity.
Better Decision Making: Data-driven insights improve AI marketing strategies.
Scalability: Businesses can manage larger campaigns with less effort.
Competitive Advantage: Companies adopting AI benefits in marketing gain an edge over competitors.
Challenges of AI in Digital Marketing
While AI in digital marketing is powerful, it comes with challenges:
Tool Costs: High-end AI tools can be expensive for small businesses.
Privacy Concerns: AI relies heavily on customer data, which must be carefully handled.
Over-reliance on Technology: Too much dependence on AI may reduce human creativity.
Complex Implementation: Learning and using AI tools requires training and technical knowledge.
Conclusion
AI in digital marketing is revolutionizing how businesses connect with customers. From AI marketing strategies to AI benefits in marketing, the technology enables smarter, more personalized campaigns. Embracing AI is essential for businesses to enhance performance, improve ROI, and stay competitive in today’s digital landscape.
Yes, you can build a food delivery website using WordPress and WooCommerce, but you’ll need a few extra plugins to make it work like a real delivery platform. WooCommerce covers the basic online store part, but the delivery features have to be added separately.
A simple setup usually includes:
WooCommerce – for your products and checkout
A restaurant/food menu plugin – to display food items in an easy-to-browse layout
A location or PIN-code checker – to control where deliveries are available
Delivery date and time plugin – so customers can choose when they want their order
Live order status or tracking add-ons – optional but helpful
Delivery partner/driver management tools – if you want to assign orders to riders
A lot of small restaurants start with this kind of WordPress setup before they move to a dedicated mobile app. For example, apps like Cravess (a growing Food Delivery App in Delhi) usually begin with a similar structure and later shift to custom-built systems when they need advanced features like real-time tracking, multi-restaurant support, or automated payouts.
So yes, WooCommerce works fine for a basic food delivery site, but if you plan to scale or add more complex features, you might eventually need a custom solution.
It is useful in your case: lightweight, native, no heavy setup.
Let me know if you want a setup for that, or you can figure it out.
https://github.com/mlocati/docker-php-extension-installer can also be an approach. Your Dockerfile then might look like:
FROM php:8.2-fpm
COPY --from=mlocati/php-extension-installer /usr/bin/install-php-extensions /usr/bin/
# add any other extensions supported by the installer to the list below
RUN install-php-extensions @composer http
...
Not yet. Only a drill to details table exists. See: https://github.com/apache/superset/tree/master/superset-frontend/src/components/Chart/DrillDetail
To send bulk SMS on WhatsApp, use Digivate IT WhatsApp Sender.
Just install the software, connect your WhatsApp by scanning the QR code, import your contact list (Excel/CSV), type your message, set the sending speed, and click Start Sending. The tool will deliver your WhatsApp messages in bulk with reporting, filtering, and anti-block features.
After a fresh install of VS 2026 I started encountering this issue, but only when I switched to Release mode. I had to go back into the VS Installer and:
Individual Components tab->Scroll down to Compilers, build tools, and runtimes
Then select the MSVC ### Build Tools you are building against
During my initial install I selected v141-v143 from the Desktop development with C++ tab, all of which should have had <filesystem>, but for some reason they didn't install and it was defaulting to v140, despite me selecting ISO C++17 or ISO C++20 as the language standard.
Can't reproduce the result of your first command here. The expected duration is 10.08, which is what I get. Run with -report and share the report.
protected $guarded = [];
do this and then try again
The four VAX floating-point formats (32-bit F format, 64-bit D format, alternate 64-bit G format, 128-bit H format) all have three classes of floating-point data, encoded by a sign bit, an exponent field, and a significand field:
On NetBSD/vax, the fpclassify() function has four possible return values for these cases:
FP_ROP is an example of a non-finite floating-point class other than infinity and NaN.
Admin area -> Repository -> Allow developers to push to the initial commit
You should be able to make a copy of the CMake templates and replace the call to the usual Antlr Tool with a call to the antlr-ng tool.
I think I did that for my version of the CMake files in template form over here: https://github.com/antlr/grammars-v4/tree/61284ea7750274b996021b2b05fa003e9c173222/_scripts/templates/Cpp/cmake. For the default generator (i.e., the usual Java-based Antlr Tool 4.13.2), I replaced that with the "antlr4" Python wrapper, since it downloads Java as well as the .jar.
What OS? The Azure DevOps Server shares a lot of documentation with the cloud version (aka "Azure DevOps Services"), so you should follow articles like this to see how to prepare an agent for your build tasks.
When you work with a buffer, always call flush(), as it forces the data in the buffer to be written to the final destination. Not flushing a buffer can cause the data not to be written to your file.
public void writeToFile(String fullpath, String contents) {
// Paths API
Path filePath = Paths.get(fullpath, "contents.txt");
try {
// Files API
Files.createDirectories(filePath.getParent());
} catch (IOException e) {
e.printStackTrace();
return;
}
// Files API
try (BufferedWriter bw = Files.newBufferedWriter(filePath)) {
bw.write(contents);
bw.flush(); // <<--- flush() forces the write to the file
} catch (IOException e) {
e.printStackTrace();
}
}
When working with files in Java, use the Paths and Files APIs.
This way, you don't need to worry about operating system issues.
Always flush (or close) the BufferedWriter, because that ensures the data in memory is actually written to the file.
@Ron Rosenfeld That is another good option. However, can you tweak it more so that it does not display "Zero Dollar" or "Zero Cent" when there is no value? I am not proficient in Python scripting.
@Cy-4AH SwiftPM does not support adding ".a" files as binaryTargets, but it supports xcframeworks. I have tried to build an xcframework and attach it, but with no success (though this was before my success with absolute paths). I will try once more. Thanks for the advice.
https://www.youtube.com/watch?v=6h1WGKJKxXI
explains very nicely how to deal with this problem.
Panda3D will fail to get access to any graphics API (OpenGL, Vulkan, or Direct3D) because repl.it online machines don't have any form of GPU. If you really want to render graphics there, I would recommend switching to an SDL-based library (like pygame), which does not require a GPU since it only uses the CPU. You may struggle to do 3D graphics, though.
Multer decodes filenames using 'latin1'. If the name contains only Latin-1 code points, re-decode it as UTF-8:
if (!/[^\u0000-\u00ff]/.test(req.file.originalname)) {
req.file.originalname = Buffer.from(req.file.originalname, 'latin1').toString('utf8')
}
I ran into a similar problem. I think `planInputPartitions(start, end)` is supposed to be idempotent, since it can be called multiple times. So how should I do things like receiving messages from SQS when triggered? `latestOffset()` is the place; we need to introduce a cache to hold the result.
Maybe git-filter-repo? (and 17 more characters =))
Could you please provide a minimal reproducible example without any dependencies on things like axios and where you either define things like System.LinkTypes.Hierarchy-Forward or, even better, remove them? It would be very helpful if we could all test any possible suggestions by merely putting code in our IDEs and running it.
=LET(x0_,DROP(A1#,-1),x1_,DROP(A1#,1),IFERROR( (INDEX(x0_,,2)=INDEX(x1_,,2))*(DROP(x1_,,3)="IN")*(DROP(x0_,,3)="OUT")*(INDEX(x1_,,3)-INDEX(x0_,,3)),))
=LET(x_,COUNTA(A:A)-1,b1_,OFFSET(B1,,,x_),b2_,OFFSET(b1_,1,0),c1_,OFFSET(b1_,,1),c2_,OFFSET(c1_,1,0),d1_,OFFSET(b1_,,2),d2_,OFFSET(d1_,1,0),y_,IFERROR((d2_="IN")*(d1_="OUT")*(b2_=b1_)*(c2_-c1_),),y_)
sample file here
Just use temporal.io. It will eliminate 90% of complexity that event driven approaches require.
Does defining the range as A2:A instead of A2:A39 help?
https://www.reddit.com/r/C_Programming/comments/xm8f8e/why_doesnt_c_have_a_standard_macro_to_determine/
Be aware that a few CPUs have dynamic endianness :(
Could not load list
403 - Forbidden
Google Drive API has not been used in project 1059907167452 before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/drive.googleapis.com/overview?project=1059907167452 then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry.
Check your credentials, and make sure the Drive API is enabled for the project your credentials belong to.
This was answered (in the negative) on the tox discussions.
https://github.com/tox-dev/tox/discussions/3648#discussioncomment-15067215
The same issue just happened to me! Any tips? And how do I access the logs? I'm new to WordPress.
I see. Thank you. I guess now that you explained it I sort of knew that. Thank you again.
Avoid bash, sed, awk et al. and use https://github.com/mikefarah/yq instead.
See https://unix.stackexchange.com/questions/646851/struggling-using-sed-command-with-variables.
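For instance, reading and updating a field looks like this (file and key names are purely illustrative; this assumes mikefarah's yq v4 syntax):

```shell
# Create a small YAML file to demonstrate on:
printf 'services:\n  web:\n    image: nginx:1.25\n' > compose.yml

# Read a value:
yq '.services.web.image' compose.yml

# Update it in place:
yq -i '.services.web.image = "nginx:1.27"' compose.yml
yq '.services.web.image' compose.yml
```

Because yq actually parses the YAML, this survives quoting, indentation, and multi-line values that would trip up a sed substitution.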
@kikon
Final question. And once again, very grateful for walking me through this.
I think I understand everything except for the condition
i - j - 1 < new_str.length
in that double-condition if statement. You explained its role as follows:
"i - j - 1 < new_str.length results in ignoring all the positions that are not in the initial string. So we can write the imaginary strings as '*bcddc?' and '**cddc??', where * stands for a character that is ignored for the palindrome test."
How does i - j - 1 < new_str.length ignore all of the positions outside the initial 6-character string? And by ignoring those positions, such as in
'*bcddc?'
does that mean that b is now index 0, c is index 1, and so forth?
Type "git checkout master" and then "git status" in your local repository to see the file status. Check whether some file has been modified or deleted.
If you want to be creative, you can try this library:
https://github.com/ggutim/natural-date-parser
It supports converting strings like "January 2, 2010" into java.time.LocalDateTime objects out of the box, without any configuration.
Lightsail buckets now support CORS configuration through the AWS CLI:
Create a JSON file containing your CORS configuration. For example, create a file named cors-config.json with the following content:
{
  "CORSRules": [
    {
      "AllowedOrigins": ["https://example.com"],
      "AllowedMethods": ["GET", "PUT", "POST"],
      "AllowedHeaders": ["*"],
      "MaxAgeSeconds": 3000
    }
  ]
}
Use the AWS CLI to apply the CORS configuration to your bucket:
aws lightsail update-bucket --bucket-name amzn-s3-demo-bucket --cors file://cors-config.json
Verify the CORS configuration was applied successfully:
aws lightsail get-buckets --bucket-name amzn-s3-demo-bucket --include-cors
Please refer to: https://docs.aws.amazon.com/en_us/lightsail/latest/userguide/configure-cors.html
I disagree with the fellow user. XY questions are valid on SO and really do benefit the community.
Duplicate question mods.
Seen it before.
I asked how to do this for curiosity's sake more than to solve a specific problem (the specific problem I was working on when this came to mind probably wouldn't have been a good use case for this anyway).
What debugging steps have you taken with this code that only finds one index? Is venueBySku what you expect it to be? I don't think db.venueDB.values() returns what you think it does.
When using partitioned tables in PostgreSQL, SQLAlchemy does not require any special syntax. You query them the same way you would query a regular table. PostgreSQL handles the partition pruning automatically.
from sqlalchemy import select

stmt = select(UserOrm).where(UserOrm.birth_year == 1990)
result = session.execute(stmt).scalars().all()
I believe I managed to figure it out. It seems that because I wasn't applying the method in the render method to both viewports, they weren't functioning properly.
Python 2 reached EOL in 2020. Consider updating to Python 3.
The main problem happens when you filter the second dataset right here:
data = data_2_schule_schulform %>%
filter (Anzahl >= 3),
This subsets the data_2_schule_schulform object inside the geom_text() call, making it "misalign" with the data_2_schule_schulform inside the geom_bar() call just above it. Removing that filter and using the same ifelse() logic you used before is the first fix. Second, you're passing fill into geom_text(), which is ignored since geom_text() doesn't support it; you should be using group instead. The quick fix is, thus:
geom_text(
data = data_2_schule_schulform,
aes(
group = Schulform,
y = Anzahl * -1,
x = Schuljahr,
label = ifelse(
Anzahl >= 3,
comma(Anzahl, accuracy = 1L, big.mark = ".", decimal.mark = ","),
""
)
)
)
Part of the confusion with your example is because you're using two different datasets in a same plot. If possible, consider stacking the datasets into a single one: this would simplify it immensely.
That said, your code has several other problems you might consider reviewing:
- guides(alpha = 'none') is doing nothing.
- theme(legend.position = 'none') and labs(fill = 'none') are doing the same thing.
- geom_bar() should be used when you want the height of the bar to represent a count of cases. If you want the heights of the bars to represent values in the data, use geom_col(). In other words, geom_bar(stat = 'identity') is the same as geom_col(), which is what you should be using.
- The comma function isn't actually doing anything since your numbers don't have decimals, and the same goes for scale_y_continuous(labels = function(x) format(x, big.mark = ".")) since the numbers are all below 1000.
- size inside geom_hline() is deprecated. Also, this horizontal line is actually making it hard to see the plot; consider removing it or making it smaller (e.g. linewidth = 0.8).

I took the liberty of making some adjustments to create a general solution to your problem.
library(ggplot2)
library(scales)
library(dplyr)
data_schule_schulform <- structure(
list(
Schuljahr = c(
"2017",
"2018",
"2018",
"2019",
"2019",
"2020",
"2021",
"2021",
"2022",
"2023",
"2023",
"2024",
"2024",
"2024"
),
Herkunftsschulform = c(
"Gymnasium",
"Förderschule",
"Gymnasium",
"Förderschule",
"Gymnasium",
"Gymnasium",
"Gesamtschule",
"Gymnasium",
"Gymnasium",
"Gymnasium",
"Sonstiges",
"Förderschule",
"Gymnasium",
"Sonstiges"
),
Anzahl = c(7, 2, 2, 1, 6, 2, 1, 2, 4, 1, 57, 1, 8, 44)
),
class = c("tbl_df", "tbl", "data.frame"),
row.names = c(NA, -14L)
)
data_2_schule_schulform <- structure(
list(
Schuljahr = c(
"2017",
"2018",
"2019",
"2019",
"2019",
"2021",
"2022",
"2022",
"2023",
"2023",
"2023",
"2024",
"2024",
"2024",
"2024"
),
Schulform = c(
"Hauptschule",
"Hauptschule",
"Förderschule",
"Gymnasium",
"Hauptschule",
"Hauptschule",
"Gymnasium",
"Hauptschule",
"Förderschule",
"Gesamtschule",
"Hauptschule",
"Förderschule",
"Gesamtschule",
"Gymnasium",
"Hauptschule"
),
Anzahl = c(3, 1, 1, 1, 5, 3, 1, 4, 1, 1, 2, 1, 1, 1, 9)
),
class = c("tbl_df", "tbl", "data.frame"),
row.names = c(NA, -15L)
)
df_text_positive <- data_schule_schulform |>
mutate(
label = ifelse(
Anzahl >= 3,
comma(Anzahl, accuracy = 1L, big.mark = ".", decimal.mark = ","),
""
)
)
df_text_negative <- data_2_schule_schulform |>
mutate(
label = ifelse(
Anzahl >= 3,
comma(Anzahl, accuracy = 1L, big.mark = ".", decimal.mark = ","),
""
)
)
ggplot() +
geom_col(
data = data_schule_schulform,
aes(fill = Herkunftsschulform, y = Anzahl, x = Schuljahr)
) +
geom_text(
data = df_text_positive,
aes(
group = Herkunftsschulform,
y = Anzahl,
x = Schuljahr,
label = label
),
position = position_stack(vjust = 0.5),
size = 3,
color = "black",
fontface = "bold"
) +
geom_col(
data = data_2_schule_schulform,
aes(fill = Schulform, y = Anzahl * -1, x = Schuljahr)
) +
geom_text(
data = df_text_negative,
aes(
group = Schulform,
y = Anzahl * -1,
x = Schuljahr,
label = label
),
position = position_stack(vjust = 0.5),
size = 3,
color = "black",
fontface = "bold"
) +
theme_minimal() +
theme(
legend.position = "none",
axis.text.y = element_text(size = 8)
)
Unfortunately, I can't post the finished image due to my low reputation. But the code above should work for your case.
Not sure what you mean, David, tbh. But I've come here for a healthy discussion.
Please tell me if it is bad [...] it doesn’t work at all
Well, should it work at all? If so then that would pretty clearly imply some measure of "bad" if it doesn't do what it's intended to do.
Is the question you're asking really the question you meant to ask?
I’m facing the same getConfig issue while configuring React Native Config in React Native 0.78.0.
Close this as off topic please.
I can find one index using db.SKU, but I'm not sure of the correct code to find all the indexes.
Using this code:
const venueBySku = db
.venueDB
.values()
.map((venue) => [venue.SKU, venue]);
const lookup = new Map(venueBySku);
const result = db.SKU
.filter(sku => lookup.has(sku))
.map(sku => lookup.get(sku));
console.log(result);
But how would I then add additional filters, i.e. also check the active flag and return the round number?
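To illustrate what I mean, here is a hypothetical version with sample data (the active and round fields are placeholders standing in for my real venueDB records):

```javascript
// Hypothetical sample data standing in for db.venueDB / db.SKU:
const venues = [
  { SKU: "a1", active: true,  round: 2 },
  { SKU: "b2", active: false, round: 3 },
];
const skus = ["a1", "b2"];

// Build the lookup, then chain the extra filter onto the result:
const lookup = new Map(venues.map((v) => [v.SKU, v]));
const rounds = skus
  .map((sku) => lookup.get(sku))
  .filter((v) => v !== undefined && v.active)
  .map((v) => v.round);

console.log(rounds); // [ 2 ]
```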
Thanks.
My company is experiencing this same issue along with others as well. Let's get as many people as we can to upvote this support ticket and get a Meta engineer looking into this asap. If you have a direct connection with someone at Facebook, reach out so it can be escalated faster.
https://developers.facebook.com/community/threads/1581825999919516/
For standalone Spark, see this example; do not connect to sc://192.168.2.5:15002, which is the Spark Connect port. If you want Spark Connect, then you need to make sure the service is running.