To configure dependencies for hybrid web and mobile apps in a monorepo using tools like Turborepo, start by organizing your project into clearly defined packages, such as apps/web, apps/mobile, and packages/ui. Use a shared root package.json to manage common dependencies, and use Yarn workspaces or npm workspaces for workspace linking. Separate platform-specific code when necessary, and keep reusable components in shared packages. Turborepo helps speed up builds and caching across projects. Maintain clean dependency boundaries, use aliases for module resolution, and configure build pipelines per app for efficient development across web and mobile platforms.
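As a rough sketch of that layout (the package names, globs, and version range below are assumptions for illustration, not taken from any specific setup), the root package.json just declares the workspaces so the package manager can link apps/web, apps/mobile, and packages/ui together:
{
  "name": "my-monorepo",
  "private": true,
  "workspaces": ["apps/*", "packages/*"],
  "devDependencies": {
    "turbo": "^2.0.0"
  }
}
Each app then lists the shared package as an ordinary dependency (for example "ui": "*", or "workspace:*" under Yarn/pnpm), and the workspace tooling resolves it to the local packages/ui folder instead of downloading a separate copy.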
You can use this command:
git branch --format='%(upstream:short)' --contains HEAD
See the article: Resolving the “Namespace Not Specified” Error in Flutter Builds | by Derrick Zziwa | Medium
I recently built a command-line YouTube downloader in Python called **ytconverter**. It’s designed to fetch videos directly from YouTube and convert them into formats like MP3 and MP4. The goal was to make a simple, functional tool without external GUIs or bloated software — just clean CLI efficiency.
**Key features:**
- Download YouTube videos
- Convert to various formats (MP3, MP4, etc.)
- Handles basic metadata
- Easy to set up and use
If you're someone who likes working with Termux, CLI tools, or Python automation, this might be useful for you.
The project is open-source, so feel free to try it out, suggest improvements, or even contribute if you're interested.
**Here’s the repo:**
[https://github.com/kaifcodec/ytconverter](https://github.com/kaifcodec/ytconverter)
Let me know what you think! Suggestions, critiques, or PRs are all welcome.
Thanks!
npx tsc --init
{ "compilerOptions": { "module": "CommonJS", // other options... } }
Or for ES modules:
{ "compilerOptions": { "module": "ESNext", "moduleResolution": "node", // other options... } }
For ES module projects, verify that your package.json has "type": "module" added.
Try running ts-node with the appropriate module flag:
npx ts-node --esm src/index.ts
Or for CommonJS:
npx ts-node --commonjs src/index.ts
Another approach would be to use ts-node-esm explicitly:
npx ts-node-esm src/index.ts
In my case, I opened the pom.xml that contains all the submodule references, right-clicked it, chose Maven, and then Sync Project. All the Spring Boot services were automatically added to the Services tool window.
P.S. IntelliJ IDEA Ultimate
I updated the langchain-community library and everything is working fine now. You can fix the issue by running the following command:
pip install --upgrade langchain-community
@NicoHaase the problem is not in EasyAdmin, but in my limited knowledge :) Here is the working code:
/*some code*/
use Doctrine\ORM\Mapping as ORM;
#[ORM\Entity(repositoryClass: ArticlesTagsRepository::class)]
#[ORM\HasLifecycleCallbacks]
class ArticlesTags
{
/*some code*/
#[ORM\PrePersist]
public function setDateCreateValue(): void
{
$this->DATE_CREATE = new \DateTime();
$this->setDateUpdateValue();
}
#[ORM\PreUpdate]
public function setDateUpdateValue(): void
{
$this->DATE_UPDATE = new \DateTime();
}
}
and it worked! Thanks for the info.
Yes, running Android Studio in the cloud has traditionally required some workarounds, such as setting up a remote desktop or using a virtual machine with GPU support. However, Google has recently introduced studio.firebase.google.com — a fully managed, browser-based development environment that significantly simplifies this process.
This new platform is a game-changer for developers looking to leverage Android Studio in a server/cloud environment without compromising on performance or flexibility.
If someone wants to update the points field every time the polygon is modified, here is some code that works:
const polyWithRecalculatedPosition = {
points: getPoints(polygon),
flipX: false,
flipY: false,
scaleX: 1,
scaleY: 1,
angle: 0,
};
polygon.set(polyWithRecalculatedPosition);
polygon.setBoundingBox(true);
canvas.requestRenderAll();
function getPoints(poly: Polygon): XY[] {
const matrix = poly.calcTransformMatrix();
return poly.get('points')
.map(
(p: Point) =>
new Point(p.x - poly.pathOffset.x, p.y - poly.pathOffset.y),
)
.map((p: Point) => util.transformPoint(p, matrix));
}
Where "polygon" means polygon object that we want update. Work with moving, scaling, skewing, resizing and fliping.
For me, I think I would just use Context.ConnectionAborted.ThrowIfCancellationRequested() and put it in the first line of the StartTesting method, because the Context holds data for every incoming request; if the user stops connecting to the hub, it will automatically throw an exception. You could try this.
I have encountered weird behavior with the most upvoted answer here:
{{ variable|number_format }}
I would still randomly get the "A non well formed numeric value encountered" error; it went away only when I explicitly stated zero decimal digits:
{{ variable|number_format(0, '', '') }}
I had the exact same issue. After a deep investigation and debugging I managed to fix it.
The issue is that your object contains some special characters, so payloads with emojis (Unicode) were also failing. I'm using NestJS and first tried to do all kinds of stuff with Buffer, different stringify packages, even a byte-by-byte comparison. Everything failed, except normal text messages.
So my solution was (at least in NestJS) to make sure you have the raw body. In my case, I add it to the request via main.ts:
import { json } from "body-parser"; // make sure to install this
app.use(
json({
verify: (req, res, buf) => {
req.rawBody = buf.toString("utf8"); // Store raw body for signature verification
},
})
);
Then you can get the raw body from the request with the @Req decorator in the controller:
@Req() req: Request & { rawBody: string }
In the end, create the expected signature and compare it to the Meta signature:
const expectedSignature = `sha256=${crypto
.createHmac("sha256", this.metaCfg.metaAppSecret)
.update(rawBody)
.digest("hex")}`;
// ...compare with Meta signature
Node.js is an open-source, cross-platform JavaScript runtime environment that runs on the V8 engine. It allows you to create servers, web apps, command-line tools, APIs, and scripts, and it is commonly used for backend application development.
React is a frontend JavaScript library used for developing web applications; it handles DOM manipulation to render the HTML on the frontend.
So you need both for complete web application development: Node.js for the backend, to execute the business logic and expose data from the database as APIs, and React for the frontend, which calls the Node.js APIs and displays the data by binding it to the HTML.
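As a minimal sketch of that split (the endpoint path, the data, and the component are made up for illustration), a Node.js backend can expose a tiny JSON API and a React component can fetch and render it:
// server.ts - Node.js backend exposing a small JSON API
import { createServer } from "node:http";

createServer((req, res) => {
  if (req.url === "/api/users") {
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify([{ id: 1, name: "Ada" }]));
  } else {
    res.writeHead(404).end();
  }
}).listen(3000);

// Users.tsx - React frontend calling the API and binding the data to HTML
import { useEffect, useState } from "react";

export function Users() {
  const [users, setUsers] = useState<{ id: number; name: string }[]>([]);
  useEffect(() => {
    fetch("/api/users")
      .then((res) => res.json())
      .then(setUsers);
  }, []);
  return (
    <ul>
      {users.map((u) => (
        <li key={u.id}>{u.name}</li>
      ))}
    </ul>
  );
}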
I just had the same issue, and it finally turned out to be caused by an incorrect resource file name passed when instantiating the ResourceManager:
new System.Resources.ResourceManager("BizTalk.Core.PipelineComponents.PromoteWCFAction.PromoteWCFAction", Assembly.GetExecutingAssembly());
In WSL Ubuntu, look for the file /etc/neo4j/neo4j.conf and un-comment the following line:
server.default_listen_address=0.0.0.0
Save the file and restart neo4j.
Maybe something like https://pub.dev/packages/pda_rfid_scanner can help with that?
I had a similar problem, and while trying out the many and diverse possible solutions offered here, I ended up with a much bigger problem: My screen resolution is suddenly stuck at 800 x 600. So I started searching for solutions to this new problem, and they are similarly many and diverse, and none of them are working. Has anyone else had this happen while doing anything described in this thread?
If you replace this line of code in your second script:
timesteps, nx, ny = hraw.shape
with this example data (you have to use your own!):
timesteps, nx, ny = 174, 200, 50
hraw = np.random.rand(timesteps, nx, ny) # Example-horizontal data
uraw = np.random.rand(timesteps, nx, ny) # Example-speed data
Is this what you are looking for?
Yes, downgrading to Xcode 16.2 works for me: https://developer.apple.com/services-account/download?path=/Developer_Tools/Xcode_16.2/Xcode_16.2.xip
I think the output ASPM L0s L1 means that the device supports both L0s and L1.
<dependencies>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-bootstrap</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.ai</groupId>
<artifactId>spring-ai-starter-mcp-server-webflux</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.ai</groupId>
<artifactId>spring-ai-starter-model-openai</artifactId>
</dependency>
<dependency>
<groupId>com.alibaba.cloud</groupId>
<artifactId>spring-cloud-starter-alibaba-nacos-discovery</artifactId>
</dependency>
<dependency>
<groupId>com.alibaba.cloud</groupId>
<artifactId>spring-cloud-starter-alibaba-nacos-config</artifactId>
</dependency>
</dependencies>
After I modified the dependencies, it was successful.
I stumbled on this question today; sharing my answer to help anyone else who comes across the same issue.
networkMode should be changed from awsvpc to bridge in the task definition. This allows ports to be mapped from the host to the container. You can verify by running docker ps - the PORTS column should show the mapping. If you are still not able to access the application, check your security group settings.
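For reference, here is a hedged sketch of the relevant task-definition fragment (the container name and port numbers are placeholders, not from your setup); in bridge mode, a hostPort of 0 lets ECS assign a dynamic host port:
{
  "networkMode": "bridge",
  "containerDefinitions": [
    {
      "name": "app",
      "portMappings": [
        { "containerPort": 8080, "hostPort": 0, "protocol": "tcp" }
      ]
    }
  ]
}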
Following ChatGPT's advice, I implemented the PKCE flow for authentication while letting the server get the code at the last stage, and ask for the token itself to register the user in my DB. All while the Nuxt proxy is in place, and forwards API requests.
This worked out, and my authentication now works for Brave and Safari. However, the proxy led me to other issues: my app is a game that needs WebSocket connections and fast interactions, and those are hindered by the proxy. So I'm going to look for a way to bring my API and frontend onto the same domain.
I'm not finding an answer to this question; I also have the same problem.
Choosing the right technology can define your project's success. You can check the blog below for more details. https://multiqos.com/blogs/nodejs-vs-reactjs/
Hi, can you give me the sample code? I am getting an error while performing it.
Validation error. error 80080204: App manifest validation error: Line 40, Column 12, Reason: If it is not an audio background task, it is not allowed to have EntryPoint="BGTask.ToastBGTask" without ActivatableClassId in windows.activatableClass.inProcessServer.
pip install openai==1.55.3 httpx==0.27.2 --force-reinstall
This installation solved the error for me.
I am on MAUI 9.0 SR6 and still have the same issue, but the issue filed on the MAUI repository is closed and locked. After some investigation, I found I have to remove the "CFBundleShortVersionString" entry from "Platforms/iOS/Info.plist", and then everything works as expected. This behaviour is not documented here: https://learn.microsoft.com/en-us/dotnet/maui/ios/deployment/publish-cli?view=net-maui-9.0
<key>CFBundleShortVersionString</key>
<string>x.x.x</string>
The issue is likely in how you're establishing the WebSocket connection to the Rust backend. When using ws in Node.js environment (your Next.js API route), the headers need to be passed differently than what you have in your code.
const socketRust = new WebSocket(wsUrl, { headers: { api_key: token, }, });
But the ws package in Node.js expects headers to be passed differently when used on the server side.
Try modifying your code like this:
const socketRust = new WebSocket(wsUrl, { headers: { 'API-Key': token, // Make sure case matches what your Rust server expects } });
If that doesn't work, another approach is to use the request option:
const socketRust = new WebSocket(wsUrl, { rejectUnauthorized: false, // Only for development! headers: { 'API-Key': token, } });
Also, check whether your Rust backend expects the header to be "api_key" or "API-Key".
Unfortunately, they removed that feature from the C# Dev Kit extension.
https://github.com/dotnet/vscode-csharp/issues/8149#issuecomment-2787688204
Maybe they will bring it back with https://github.com/dotnet/vscode-csharp/pull/8169
When I uninstalled tensorflow 2.15.0 and keras 2.15.0 and then installed tensorflow first and keras afterwards, the problem was resolved. Maybe installing keras first causes tensorflow.keras to not be installed properly?
Please disregard; the error was pretty clear. Upgrading commons-io to the latest version fixed the issue.
In case of updating to Puppeteer 23, see https://github.com/puppeteer/puppeteer/issues/13209#issuecomment-2428346339
Yes, it works perfectly fine on the real phone.
The problem was with the emulator: the build architecture was not fully supported by the emulator, and the corresponding message was in the output, which I had been ignoring all the time :(
In AngularJS, you can pass parameters through routes using the $routeProvider service by including a colon (:) followed by the parameter name in the URL path.
$routeProvider
.when('/user/:userId', {
templateUrl: 'user.html',
controller: 'UserController'
});
In this example, :userId is a route parameter. You can access it in your controller using $routeParams:
app.controller('UserController', function($scope, $routeParams) {
$scope.userId = $routeParams.userId;
});
This allows you to dynamically handle data based on the URL, such as loading a specific user’s profile.
If you're developing or maintaining AngularJS applications and need expert help, check out our AngularJS development services to learn how we can support your business with scalable, clean, and efficient solutions.
Have you solved the problem? I am encountering the same issue.
As of today (April 2025), VSCode now includes built-in Git Blame support — you just need to enable it in the Settings.
https://github.com/microsoft/vscode/issues/205424#issuecomment-2504143954
Hey, did you find any solution for this?
Your current project seems to have been upgraded from an older Teams Toolkit project. You could download the latest project from https://github.com/OfficeDev/teams-toolkit-samples/tree/dev/bot-sso and see whether it works.
When I set the value to the variable, it doesn't work.
Apparently there is no way, or the developers here are not experienced enough.
Yes, this is expected behaviour: only one VirtualService can be applied per host, and if there are multiple VirtualService resources for the same host, Istio will pick one arbitrarily, leading to unpredictable behavior. You must combine the main and feature routing logic into a single VirtualService per host.
Have you found the solution? Because I am also stuck in a similar situation.
It seems that your protobuf version is incompatible with TensorFlow 2.0. You can check if you're using version 3.6.0. If not, try running:
pip uninstall protobuf
pip install protobuf==3.6.0
Then run your code again.
I had opened square brackets instead of curly braces for my useEffect callback function.
If you don't want to change the global php.ini, add this line at the top of the controller method or inside your route:
ini_set('max_execution_time', 300); // 5 minutes
Eg:
public function longRunningTask()
{
ini_set('max_execution_time', 300);
// Your heavy process here
}
If you're using Angular 18, then please use @microsoft/signalr version "8.0.0". I migrated my project from 14 to 18, which also includes module federation. I thought it was a federation issue when I used @microsoft/signalr "8.0.7"; when I used @microsoft/signalr "8.0.0", the issue was resolved.
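In other words, pin the exact version in package.json (a sketch, version as per the note above):
{
  "dependencies": {
    "@microsoft/signalr": "8.0.0"
  }
}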
I am trying to pass a list of columns into the dataflow and do a column match, but it's throwing the error:
Column functions are not allowed in constant expressions
To work around this, there are a few things you need to cross-check first.
byName() is a column function, and $columnList is a parameter (constant expression), and ADF doesn't allow mixing these in a direct boolean expression inside column pattern rules or derived column mappings. I got the same error when I tried it the same way you did.
First, cross-check that your dataset connection is successful, and make sure "First row as header" is checked.
Then build the matching expression in a derived column and check the output.
The error is in the following line:
elif message.text.lower() == 'hello' or 'hi':
This statement is always true, as @Klaus has pointed out. That is because message.text.lower() == 'hello' and 'hi' are two separate conditions. The second one ('hi') is always true since it's just a string, and the boolean value of a non-empty string in Python is True. To make the statement work, change the aforementioned line to:
elif message.text.lower() == 'hello' or message.text.lower() == 'hi':
The solution that fixed my issue was not only setting logfile to an empty string (logfile "") in redis.conf, but also setting the daemonize field to no (daemonize yes --> daemonize no) in redis.conf.
I've also encountered a similar issue. Upon troubleshooting, I discovered that the Background Intelligent Transfer Service (BITS) had stopped. Once I restarted this service, my problem was resolved.
In the system where you wish to generate these JSONs, do you have access to the terraform binary itself, and are you able to include the terraform.io/builtin/terraform provider in your configuration? If yes, you could do something elegant, like this:
First add the terraform provider:
terraform {
required_providers {
terraform = {
source = "terraform.io/builtin/terraform"
}
}
}
Then use this to get your JSON, which you can then pipe to a file as required:
echo 'jsonencode(provider::terraform::decode_tfvars(file("${path.module}/PATH-TO-FILE.tfvars")))' | \
terraform console | \
jq -r | \
jq
The usage of jq is optional - it is used twice as the output of terraform console is a stringified JSON, so the first jq -r converts the string to a parseable JSON.
This makes use of the decode_tfvars function provided by terraform, which converts the contents of a .tfvars file to a Terraform object.
I found the blob image and it is still just plain text; can someone help me? :<
I managed to comprehensively break Python and grub in my install of Mint 22 due to first messing with various Python versions, then attempting to update grub in order to give me a text-only login. The machine simply wouldn't boot following this.
Booting from a live USB image and chrooting to reinstall/update grub would not work as the Python version was different, removing all Python links in update-alternatives further broke it.
This post gave me enough clues to restore from the live login without having to completely re-install, (thanks @maxirodr) however I found it necessary to make the following minor changes:
Step 1 - as posted (but with the requisite version of Python):
apt-get download libpython3.12-minimal
apt-get download python3.12-minimal
apt-get download python3-minimal
apt-get download libpython3.12-stdlib
apt-get download python3.12
Step 2 - I did not remove the existing python3.12 directory but did clear all python versions in update-alternatives. This may or may not be a good idea for anyone reading this, _be careful_:
update-alternatives --remove-all python3
hash -r # removes cached python3 binary path
Step 3 - just directly install the packages
dpkg -i libpython3.12-minimal_3.12.3-1ubuntu0.5_amd64.deb
dpkg -i libpython3.12-stdlib_3.12.3-1ubuntu0.5_amd64.deb
dpkg -i python3.12-minimal_3.12.3-1ubuntu0.5_amd64.deb
dpkg -i python3.12_3.12.3-1ubuntu0.5_amd64.deb
dpkg -i python3-minimal_3.12.3-0ubuntu2_amd64.deb
Step 4 - not required
Step 5 - as posted (although all I did was to check I got the python prompt ok)
Step 6 - not required
Steps 7-9 - as posted.
I was then able to update grub correctly from the live login and boot back into the machine proper.
Hey guys, I can't get my legend to be just a little smaller; right now it is the size of Africa. Can you help me please? My assignment is due tomorrow.
GitHub is not accepting the normal username/password login. What it needs is the username and an API token instead of the password. Go to the GitHub website's settings and issue an API token, which then needs to be entered at the password prompt.
I've been trying to find out if it's possible to parameterize the linked service in an Azure Data Factory (ADF) dataset, similar to how we can parameterize the table name and schema name in a SQL Server dataset. Is there a supported way to do this?
You cannot directly parameterize the linked service inside a dataset, but you can achieve dynamic linked service selection by creating a parameterized linked service and then creating a dataset that passes parameters to that linked service.
Add the parameters as well, and test the connection.
When you create a dataset, create it using the parameterized linked service; it will then ask you to add values for the server name and database name. When you assign this dataset in a pipeline activity, ADF will ask you to pass values for the linked service parameters.
I found the cause of the issue, and the fix as well.
The problem was that while I was running the kubectl command, the kubeconfig was trying to generate the access token on the fly, and SSM did not have the required access for that.
As soon as I hardcoded the token value in the script, it started working from the Systems Manager run command as well.
Now the only problem I have is that the token expires every 15 minutes, but I can refresh it every time I run the script, within the script itself.
Hope this helps!!
These three commands work for me too, but they shut down and restart the system. Thanks!
It looks like the stylesheet is loaded from JavaScript (or maybe an iframe, if one exists). To reproduce how the source is loaded, you can find what caused the CSS to be loaded as it is and trace it.
I feel it's important to step in here, speaking as a former PokerStars Game Integrity specialist. First and foremost, let's clarify and not confuse a few key concepts:
Hands obtained from your own play — These are private and belong solely to the participants of the game. While observers may occasionally witness a portion of this data (typically showdowns), the complete hand histories are strictly limited to those who actually played the hand.
Datamined hands — These are hand histories collected via third-party tools or unauthorized means, often without the consent or knowledge of the players involved. In some cases, these hands are sold by players themselves or scraped through spyware or illicit software. Sites like PokerTableRatings (PTR) were known for dealing in these kinds of hand histories — a practice SharkScope has zero involvement with, now or ever.
Tournament Results — These are public records, comparable to sports results published in newspapers (e.g., basketball or baseball box scores). For example:
1st Place — Player XXX — $400
2nd Place — Player YYY — $350
3rd Place — Player ZZZ — $290
These leaderboards are generated and displayed after each tournament and are intended to be transparent, both for the benefit of the player community and for maintaining game integrity. (And yes — boos to GGPoker for choosing to obscure this essential information.)
It’s crucial to understand the core difference between SharkScope and sites like PokerTableRatings (PTR).
SharkScope's primary focus is — and always has been — the aggregation and analysis of public tournament results, not individual hand histories. The site provides tools that help players evaluate long-term performance, study profitability trends, and detect patterns that could suggest suspicious or unfair play (e.g., collusion, chip dumping, or multi-accounting). This is accomplished through statistical analysis of tournament placements and winnings, all sourced from public leaderboards.
In contrast, PokerTableRatings (PTR) operated by harvesting and distributing hand histories, often gathered via questionable or outright unauthorized methods. This kind of data collection raised serious ethical and privacy concerns and was a clear violation of many poker platforms' terms of service.
To summarize:
SharkScope = public tournament result tracking, performance analytics, and integrity-focused tools.
PTR = unauthorized hand history harvesting and resale.
It's essential not to confuse the two, as their methods, purposes, and ethical standings are fundamentally different. SharkScope is built on transparency and the responsible use of publicly available data to improve both player knowledge and game integrity.
Cheers!
I found a solution: if you add @Sendable, the warning will disappear.
func foo() async throws {
let ctx = NSManagedObjectContext(concurrencyType: .privateQueueConcurrencyType);
//..
try await ctx.perform { @Sendable in // warning disappears
if ctx.hasChanges {
try ctx.save();
}
}
}
Once you've deployed a new version of your assets, you can invalidate those cached files. This can be done by specifying the exact file name and path, or by using the * wildcard to match multiple files in a directory. Do note, the * wildcard only works at the end of the string and is treated as a literal when used anywhere else.
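If you trigger the invalidation programmatically rather than from the console, a minimal sketch with the AWS SDK for JavaScript v3 looks roughly like this (the distribution ID and paths are placeholders):
import { CloudFrontClient, CreateInvalidationCommand } from "@aws-sdk/client-cloudfront";

const client = new CloudFrontClient({ region: "us-east-1" });

await client.send(
  new CreateInvalidationCommand({
    DistributionId: "E1234567890ABC",
    InvalidationBatch: {
      CallerReference: Date.now().toString(), // must be unique per invalidation request
      Paths: {
        Quantity: 2,
        Items: ["/index.html", "/assets/*"], // "*" only allowed at the end of a path
      },
    },
  })
);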
I'm using this timer; hope my comment will help others who need it: https://tempmailusa.com/10minutetimer/
I am facing the same issue. I have a script placed on a server. The server already has kubectl and the AWS CLI installed.
WHEN SCRIPT IS EXECUTED WITH AWS SSM
the script runs aws eks update-kubeconfig and then a kubectl command, which fails with the error below:
ERROR-------
E0417 15:54:14.627818 31772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp 127.0.0.1:8080: connect: connection refused"
The connection to the server localhost:8080 was refused - did you specify the right host or port?
WHEN SCRIPT IS EXECUTED DIRECTLY FROM SERVER, IT PASSES THROUGH.
Note: the user in both cases is root, which was verified with whoami.
Please help me if you found a solution.
I don't have enough reputation to add a comment to VAS's comment, so I have to add "an answer" here to remind people reading this page.
The kubectl proxy and kubectl port-forward commands don't work the same way.
In short, kubectl proxy requires the kube-apiserver to access resources like pods/nodes/services via their ClusterIP, while port-forward requires the kube-apiserver to coordinate with the kubelet to forward the traffic.
This difference will cause a different user experience in some serverless Kubernetes distributions.
You can use JWT for that; it helps you guard the user and the admin differently.
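A minimal sketch of that idea with Express and the jsonwebtoken package (the secret source, the role claim, and the route names are assumptions for illustration):
import express from "express";
import jwt from "jsonwebtoken";

const app = express();
const SECRET = process.env.JWT_SECRET ?? "change-me"; // assumption: secret comes from the environment

// Middleware factory: only lets through tokens whose payload carries the required role
function requireRole(role: string) {
  return (req: express.Request, res: express.Response, next: express.NextFunction) => {
    const token = req.headers.authorization?.replace("Bearer ", "");
    if (!token) return res.status(401).json({ message: "Missing token" });
    try {
      const payload = jwt.verify(token, SECRET) as { role?: string };
      if (payload.role !== role) return res.status(403).json({ message: "Forbidden" });
      next();
    } catch {
      return res.status(401).json({ message: "Invalid token" });
    }
  };
}

app.get("/user/profile", requireRole("user"), (_req, res) => res.json({ area: "user" }));
app.get("/admin/dashboard", requireRole("admin"), (_req, res) => res.json({ area: "admin" }));
This way the same verification logic is reused, and only the required role differs between the user and admin routes.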
<style>
#shownumber {
border-radius: 50%;
border: 5px solid #F10A0E;
color: #F10A0E;
font-size: 30px;
width: 80px;
text-align: center;
position: absolute;
margin-top: -25px;
}
label {
font-size: 17px !important;
}
.hiclass {
background-color: rgb(51, 144, 255);
color: white
}
</style>
<h5 class="text-left battambang text-bold">{{ $campus ?? 'គ្រប់សាខាទាំងអស់' }}</h5>
<h5 class="text-center muol">ចំនួននិស្សិតចុះឈ្មោះប្រចាំថ្ងៃ</h5>
<h6 class="text-center battambang text-bold">គិតត្រឹមថ្ងៃទី:{{ dateFormat($from_date) }} ដល់ {{ dateFormat($to_date) }}</h6>
<h6 class="text-center battambang text-bold">ថ្ងៃបោះពុម្ភ: {{ date('d-m-Y h:i A') }}</h6>
<h6 class="text-center battambang text-bold" style="text-align: left !important;">System:0025</h6>
<table class="table-report table-report-boder">
<div id="main_chart" style="height: 500px; width: 100%; margin-top: 30px;"></div>
</table>
<table align="center" width="98%" style="font-size: 14px; line-height: 25px; margin-top: 20px;">
<tbody>
<tr style="text-align: center;">
<td width="25%"></td>
<td width="20%"></td>
<td width="70%" nowrap="nowrap"><span id="khmer-lunar-date"></span></td>
</tr>
<tr style="text-align: center;">
<td></td>
<td></td>
<td>រាជធានីភ្នំពេញ, @khmer_date(date('Y-m-d'))</td>
</tr>
<tr style="text-align: center;">
<td>បានឃើញនិង ឯកភាព</td>
<td>បានពិនិត្យ</td>
<td>អ្នកធ្វើតារាង</td>
</tr>
</tbody>
</table>
<script>
$(document).ready(function () {
var chartDom = document.getElementById('main_chart');
var chart = echarts.init(chartDom);
var labels = @json($dailyLabels).map(function(dateStr) {
return convertDate(dateStr);
});
var option = {
title: {
text: 'ចំនួននិស្សិតចុះឈ្មោះប្រចាំថ្ងៃ',
left: 'center',
top: 10,
textStyle: {
fontSize: 18,
fontFamily: 'Khmer OS Muol'
},
subtextStyle: {
fontSize: 14,
fontFamily: 'Khmer OS Battambang'
}
},
tooltip: {
trigger: 'axis'
},
xAxis: {
type: 'category',
data: labels,
},
yAxis: {
type: 'value',
name: 'និស្សិត'
},
series: [{
name: 'ចំនួននិស្សិត',
type: 'bar',
data: @json($dailyCounts),
barWidth: '50px',
itemStyle: {
color: '#3398DB',
borderRadius: [5, 5, 0, 0]
}
}]
};
chart.setOption(option);
});
function convertDate(dateStr) {
const parts = dateStr.split('-');
let year = parts[2];
if (year.length === 2) {
year = '20' + year;
}
parts[2] = year;
return parts.join('-');
}
$(document).find('#khmer-lunar-date').html(khmerDate().toLunarDate());
Can you help me? I need to print the chart.
In Pydantic v1 the method was called .dict(), it was deprecated (but still supported) in Pydantic v2, and renamed to .model_dump().
The examples here use .dict() for compatibility with Pydantic v1, but you should use .model_dump() instead if you can use Pydantic v2.
Finally I found the answer. When using B1 call atoms, you don't need to add the BOM, BO, and AdminInfo tags; just starting from the "<Document>" tag will work.
The most common cause is the authentication token expiration. Snowflake's tokens expire after a set period. If that is the case, edit your connection in Looker Studio and re-authenticate with your Snowflake credentials. Consider using service accounts for a permanent fix, creating a dedicated service user in Snowflake, granting the appropriate permissions, then using those credentials in Looker Studio.
Another possibility is that Snowflake is blocking Google's IP ranges, so check your network policies in Snowflake and add Google's IP ranges to your allowlist.
If your queries are complex, they might be timing out, so optimize your queries, consider creating materialized views in Snowflake, or increase the timeout parameter.
Configuration info for Safari 18. The Safari menu includes an additional Settings menu item for the page. When this menu item is selected, a window is rendered that allows you to configure the Auto-Play. When Auto-Play is set to Allow All Auto-Play, the js will play.
The relevant part is this:
-DCMAKE_C_COMPILER=C:/msys64/mingw64/bin/gcc.exe
And
-DCMAKE_CXX_COMPILER=C:/msys64/mingw64/bin/g++.exe
You need to confirm and correct these paths.
If you would like the row(s) affected and other statistics displayed as a table instead of a list, and are struggling with the fact that the statistics object is a dictionary, here is how I converted and displayed it with a little help from https://stackoverflow.com/a/18495802/2260616
Note the list of 18 statistics available at https://learn.microsoft.com/en-us/dotnet/framework/data/adonet/sql/provider-statistics-for-sql-server
After calling Invoke-Sqlcmd with -StatisticsVariable "sqlcmd_statistics_dictionary":
$sqlcmd_statistics_object = ($sqlcmd_statistics_dictionary `
| %{New-Object PSObject -Property $_})
Format-Table -InputObject $sqlcmd_statistics_object -AutoSize -Property `
IduCount, IduRows, SelectCount, SelectRows, SumResultSets, `
Transactions, BuffersSent, BytesSent, BuffersReceived, BytesReceived
Format-Table -InputObject $sqlcmd_statistics_object -AutoSize -Property `
ConnectionTime, ExecutionTime, NetworkServerTime, ServerRoundtrips, `
CursorOpens, PreparedExecs, Prepares, UnpreparedExecs
The results will look like:
Mailchimp's standard API endpoints, such as POST /lists/{list_id}/members, require an email_address field, making them unsuitable for adding SMS-only contacts.
Adding SMS-Only Contacts:
In order to add contacts without email addresses, you can import them via a CSV or TXT file. This allows you to include only a phone number for SMS marketing. (https://mailchimp.com/help/set-up-your-sms-marketing-program/) (https://mailchimp.com/help/about-sms-marketing/)
You can follow these steps to import SMS contacts:
Create a CSV or TXT file containing the phone numbers of your SMS subscribers. (https://mailchimp.com/solutions/sms-marketing-tools/)
You also need to make sure that you have consent from those contacts to receive SMS messages. (https://mailchimp.com/solutions/sms-marketing-tools/)
Then import your contacts using Mailchimp's import tool to upload your file and add the SMS contacts to your audience.
Also, before you import SMS contacts, you need to set up an SMS Marketing program in Mailchimp. (https://mailchimp.com/help/use-send-sms-actions/)
Workaround Using Placeholder Emails:
Using placeholder email addresses to bypass the email requirement is not really recommended, because it may violate Mailchimp's terms of service. (https://mailchimp.com/developer/transactional/docs/fundamentals)
The image_picker package needs two additional packages, image_picker_android and image_picker_platform_interface. Then you can continue using ImagePicker().pickMultiMedia() on both Android and iOS.
Set up the main.dart file:
void main() {
final ImagePickerPlatform implementation = ImagePickerPlatform.instance;
if (implementation is ImagePickerAndroid) {
implementation.useAndroidPhotoPicker = true;
}
...
}
@Barry's suggestion of `using` seems a reasonable solution to me, but in the end doesn't really save much given how simple a (non-template) operator<< for S would be.
I like the idea of a base class used just to guide ADL to the right answer. I'll have to see if that is feasible in our real codebase.
The problem with putting the template in global namespace (or in N) is that works.... but only sometimes. Because this is relying on normal, non-ADL lookup, it is subject to shadowing. So if _any_ other operator<< is visible in N::N1 namespace the one in N (or global) namespace is hidden. This is very fragile and working code can be broken by completely unrelated changes, and the failure can be very context dependent (i.e. works for most usages but fails if some unrelated type N::N1::C, which happens to have an operator<< , is visible.)
As to the original question of adding to std:: namespace, it might be argued that this case might be counted under this clause (from cppreference)
It is allowed to add template specializations for any standard library function template to the namespace std only if the declaration depends on at least one program-defined type and the specialization satisfies all requirements for the original template, except where such specializations are prohibited.
as std::operator<<(std::basic_ostream<C, Trait>&, T) is already a template (for at least some types, in at least some implementations) and we are adding a partial specialisation of that.
When the controller tries to connect to the database, some error occurs. You can enable retry if it is not enabled yet.
If it is already enabled, then there is a problem with your connection to your DB instance. You need to track down and diagnose where the failure is (network, authentication, and so on).
I have a similar problem, but here it is with SFML for C++, and the files didn't come with their graphics driver.
So have you fixed that?
Message me on Telegram at @codeem, let's build this on the web. And let's also add a PWA so you can generate images on the web with a single command.
import network

# bssid comes back from scan() as a 6-byte bytes object
def mac_fmt(bssid):
    return "{:02x}:{:02x}:{:02x}:{:02x}:{:02x}:{:02x}".format(*bssid)

ap = network.WLAN(network.STA_IF)  # station interface
ap.active(True)
for ssid, bssid, channel, RSSI, authmode, hidden in ap.scan():
    mac = mac_fmt(bssid)
    print(f"BSSID: {mac}")
Good day, I need help.
How do I make two histograms when I have a variable with two levels, place 1 and place 2, but these are in a single column? When I make the histogram it takes all the data and doesn't split it by place. My response variable is vitamin C content.
Note: the levels of place are one after the other.
My question is how I tell R to take the data from place 1 for one histogram and the data from place 2 for the other histogram.
Thank you for your answer.
I have the same question as you described. Have you solved it? Could you share the idea? Thanks!
If you want to prevent the last slide from being cut off, you should not set any gap or spacing yourself in the slide wrapper; use slidesPerView and spaceBetween instead.
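For example, a minimal configuration along those lines (the selector and the 16px value are just placeholders) lets Swiper manage the spacing itself instead of CSS margins on the slides:
import Swiper from "swiper";

// Let Swiper size and space the slides; avoid extra margins/gaps in the wrapper CSS
const swiper = new Swiper(".swiper", {
  slidesPerView: "auto", // or a fixed number such as 3
  spaceBetween: 16,      // gap handled by Swiper, so the last slide isn't clipped
});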
public class Config implements WebMvcConfigurer
WebMvcConfigurer is not deprecated.
I produced the usable code following your question:
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.CorsRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;
@Configuration
public class Config implements WebMvcConfigurer {
@Value("${allowed-origins:*}")
String allowedOriginsPattern;
// contains "http://localhost:8080,http://127.0.0.1:8080,http://[::1]:8080"
@Override
public void addCorsMappings(CorsRegistry registry) {
String[] origins = allowedOriginsPattern.split(",");
registry.addMapping("/**")
.allowedOriginPatterns(origins)
.allowedMethods("GET", "OPTIONS", "POST")
.allowCredentials(true);
}
}
In the application.yml, I added:
server:
servlet:
context-path: /
allowed-origins: "http://localhost:8080,http://127.0.0.1:8080,http://[::1]:8080"
An incorrect context-path could be a reason for a CORS problem.
The example repository was here.
To try, after starting Spring Boot, please visit in your browser:
http://localhost:8080
http://127.0.0.1:8080
http://[::1]:8080
Example output in the browser:
CORS
GET http://127.0.0.1:8080/api
GET request page
POST http://127.0.0.1:8080/api
POST request page
GET http://localhost:8080/api
GET request page
POST http://localhost:8080/api
POST request page
GET http://[::1]:8080/api
GET request page
POST http://[::1]:8080/api
POST request page
In the example, the JavaScript I used:
<script>
async function getter(uri, elementId){
const responseGet = fetch(uri, { "method": "GET" })
.then(response => {
if (!response.ok) {
return response.text()
.catch(() => {
throw new Error(response.status);
})
.then(({message}) => {
console.log(message);
throw new Error(message || response.status);
});
}
return response.text();
});
const responseGetText = await Promise.resolve(responseGet);
const elementGet = document.getElementById(elementId);
elementGet.innerHTML = elementGet.innerHTML + "<div>" + responseGetText + "</div>"
}
async function poster(uri, elementId, body){
const responsePost = fetch(uri, { "method": "POST", body: body })
.then(response => {
if (!response.ok) {
return response.text()
.catch(() => {
throw new Error(response.status);
})
.then(({message}) => {
console.log(message);
throw new Error(message || response.status);
});
}
return response.text();
});
const responsePostText = await Promise.resolve(responsePost);
const elementPost = document.getElementById(elementId);
elementPost.innerHTML = elementPost.innerHTML + "<div>" + responsePostText + "</div>"
}
async function main(){
await getter("http://127.0.0.1:8080/api", "getipv4");
await poster("http://127.0.0.1:8080/api", "postipv4", JSON.stringify({"user": "user"}));
await getter("http://localhost:8080/api", "getlocalhost");
await poster("http://localhost:8080/api", "postlocalhost", JSON.stringify({"user": "user"}));
await getter("http://[::1]:8080/api", "getipv6");
await poster("http://[::1]:8080/api", "postipv6", JSON.stringify({"user": "user"}));
}
main();
</script>
For a recent version of jQuery:
$("#slider").slider('option', 'value')
returned zero for a double slider version. This returned a two element array with the proper values:
$("#slider").slider('option', 'values')
Note that the working option name, values, is plural.
The correct answer from @jonrsharpe:
"how TypeScript obtains the original DOM document object from the browser" - it doesn't. Those interfaces are only used to make sure you're doing the right thing at compile time. At run time, you have regular JavaScript accessing the global document object.
You can try to use the Table.Group function:
= Table.Group(Source, {"Type"}, {{"ID", each Text.Combine([ID],",")}})
You can try to create a measure:
Measure = if(max('Calendar'[Date])>=today(),1)
Add this measure to the visual filter and set it to 1.
Nice, really great, very satisfying.
If you're reading this in 2025, Cloud Run now has the "Send traffic directly to a VPC" feature.
Actually, when you 'enable platform', there is no option to save. So the changed settings are not saved, and you are back to square one, meaning you are unable to share and still get the message.
Andrew Kin Fat Choi's answer helped!
The API spec of the Azure DevOps - Approvals And Checks - Check Configurations endpoint is here.
A sample payload looks like this. Seems like your approver object is slightly different.
POST https://dev.azure.com/{organization}/{project}/_apis/pipelines/checks/configurations?api-version=7.1-preview.1
{
"settings": {
"approvers": [
{
"displayName": null,
"id": "3b3db741-9d03-4e32-a7c0-6c3dfc2013c1"
}
],
"executionOrder": "anyOrder",
"minRequiredApprovers": 0,
"instructions": "Instructions",
"blockedApprovers": []
},
"timeout": 43200,
"type": {
"id": "8c6f20a7-a545-4486-9777-f762fafe0d4d",
"name": "Approval"
},
"resource": {
"type": "queue",
"id": "1",
"name": "Default"
}
}
Refer to this link for all field specs:
https://learn.microsoft.com/en-us/rest/api/azure/devops/approvalsandchecks/check-configurations/add?view=azure-devops-rest-7.1&tabs=HTTP
The OP asked for string interpolation similar to a Python f-string. The goal is that a user only needs to enter a single string to define a template, but within that string there are specially marked "placeholders" indicating where to substitute a value. The placeholder can be a key enclosed in curly braces, e.g. {TICKER}. Google Sheets didn't have such a formula.
Fast forward to 2025: Sheets still doesn't have it, but it does offer array-based formulas and iteration mechanisms like REDUCE. Combined with REGEXREPLACE, we can define a Named Function that simulates simple interpolation.
Formula. The formula below takes three parameters:
- template: a string that contains placeholders enclosed in curly braces
- keys: a string to define the placeholder key (or a single-row array if multiple placeholders)
- values: a value corresponding to the key (or a single-row array if multiple placeholders)
Create a Named Function TEMPLATE with these three parameters, so named and in that order, then enter the formula definition:
= REDUCE( template, keys, LAMBDA( acc, key,
LET(
placeholder, CONCATENATE( "\{", key, "\}" ),
value, XLOOKUP( key, keys, values ),
REGEXREPLACE( acc, placeholder, TO_TEXT(value) )
)
) )
Example with one placeholder. Your example uses a custom Apps Script function called ImportJSON, but your question is more about the string interpolation, so I will just focus on how to generate the URL based on the value of A2 (the cell containing string "BTC"). In a cell enter:
= TEMPLATE(
"https://api.coingecko.com/api/v3/coins/markets?vs_currency=usd&ids={TICKER}",
"TICKER", A2
)
The result should be https://api.coingecko.com/api/v3/coins/markets?vs_currency=usd&ids=BTC
Example with multiple placeholders. You can give arrays as the keys and values arguments for multiple placeholders:
= TEMPLATE(
"My name is {name} and I am {age} years old.",
{ "name", "age" }, { "Lan", 67 }
)
Confirm the result is: My name is Lan and I am 67 years old.
How it works? The REDUCE formula takes the string template as an initial value and iterates through the values of keys. In each iteration:
- The key is wrapped in escaped curly braces to form the placeholder pattern, e.g. "\{name\}".
- The corresponding value is looked up with XLOOKUP, e.g. "Lan".
- REGEXREPLACE is applied on the template to match all instances of the placeholder and replace each one with the value.
- The partially substituted result (e.g. "My name is Lan and I am {age} years old.") is passed to the next iteration.
Named Function. If you just want to use this, you can import the Named Function TEMPLATE from my spreadsheet functions. See the documentation at this GitHub repo for more details.
There is a library that does just that, and it also provides configuration for retries and refreshes for resilience.
Reviving this topic because the Extension system in Blender 4.2 provides a proper solution to this problem, which involves packaging .whl files (which you can download from PyPI) with your extension, and then mentioning that .whl file in the extensions' manifest file. Documentation here.
It's also possible to pull this off pre-Blender 4.2, by implementing your own code to import .whl files, which is easiest to do by simply stealing this code from Blender Studio
I've just tested both methods and they work great, unlike any other solution I've tried, which either relies on deprecated pip function calls, or they install the modules in the global python environment no matter what you do, which isn't available from inside Blender, or they rely on launching Blender from a venv, which is obviously silly.
All the data "in" Databricks is stored wherever you say you want it stored. Databricks can connect to, and manage via Unity Catalog, multiple sources.
It sounds like you just want to make sure that all your storage is in your own Azure tenant, which is the basic, standard setup: you'd have a storage blob/server/hyperscale etc. in Azure that is associated with your tenant, and that is the storage that Databricks would use.
Azure Databricks is not a storage account; your storage is self-directed in whatever capacity you choose. Obviously, choosing to store on an Azure storage account makes sense here, as does having that storage account in the same Azure tenant as the Databricks environment.
Why don't you use the global call, which is what I usually use: http://10.0.2.2:<port>